Important Points to Consider When Choosing a VPS Hosting Package

Tips for Finding the Best VPS Hosting Package

VPS is widely considered the natural progression for companies upgrading from shared hosting. Affordable, high-performance and inherently more secure, it provides considerably more resources for only a small increase in price. While VPS might be the right decision, finding the best hosting provider and package takes some consideration. Here are some important tips to help you make the right choice.

What are you going to use VPS for?

VPS hosting comes in a range of packages, each offering differing amounts of resources, such as storage, CPU and RAM. The needs of your business should dictate what resources you’ll need now and in the foreseeable future, and this should be a key consideration when looking for a VPS package; otherwise, you might restrict your company’s development further down the line. Here are some of the main things VPS is used for.

Large and busy websites


The extra storage and processing power offered by VPS makes it ideal for companies with large or multiple websites with heavy traffic. The additional resources enable your website to handle large numbers of simultaneous requests without affecting the speed and performance of your site, ensuring fast loading times, continuity and availability.

Deploy other applications


As businesses grow, they tend to deploy more applications for business use. Aside from a website, you may want to utilise applications for remote working, employee tracking, access control or some of the many others which businesses now make use of. Not only does VPS give you the storage and resources to run these workloads; it also gives you the freedom to manage and configure your server in a way that best suits your needs.

Remember, the more apps you use and the more data you store, the bigger the package you’ll require.

Other common uses of VPS

A VPS can be utilised for a wide range of purposes. It can be used for developing apps and testing new environments, for private backup solutions and for hosting servers for streaming and advertising platforms; some individuals even use one to host gaming servers so they can play their favourite games with friends online.

Whichever purposes you have in mind for your VPS, look carefully at the resources you need now and leave yourself room to grow in the future.

Latency and location


One issue that many businesses don’t fully consider is the location of their VPS server. This, however, can have an impact in a number of ways. As data has to travel between the server and a user’s machine, the further apart the two devices are, the longer communication takes. This latency can have big implications. Firstly, it can make your website load slowly for more distant users, which has been shown to increase the number of visitors who abandon your website and, consequently, lower conversion rates. Secondly, it slows response times on your site, so when someone carries out an action, there is an unnecessary delay before the expected result occurs (a major issue for gaming servers). Thirdly, search engines measure loading speed and may downrank your website if it isn’t fast enough, so your organic traffic can diminish.
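
To see how much location matters in practice, you can measure round-trip times to candidate hosts before committing to a package. The sketch below is a minimal illustration in Python using hypothetical hostnames; real providers usually publish test endpoints or looking-glass servers you can measure against.

```python
import time
import socket

# Hypothetical test endpoints for candidate VPS locations (replace with
# your provider's published test servers or looking-glass hosts).
CANDIDATES = {
    "UK (London)": "uk-test.example.com",
    "US (New York)": "us-test.example.com",
    "SG (Singapore)": "sg-test.example.com",
}

def tcp_connect_time(host: str, port: int = 443, attempts: int = 5) -> float:
    """Average time (ms) to open a TCP connection, a rough proxy for latency."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=5):
                pass
        except OSError:
            continue  # skip failed attempts
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples) if samples else float("inf")

if __name__ == "__main__":
    for label, host in CANDIDATES.items():
        print(f"{label:15s} {tcp_connect_time(host):8.1f} ms")
```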

Another vital consideration is compliance. To comply with regulations like GDPR, you have to guarantee that the personal data you collect about UK and EU citizens is kept secure. While you can ensure this in countries which are signed up to GDPR, like the UK, the bulk of the world’s servers are hosted in the US where the data on them can be accessed by US law enforcement for national security purposes. In such instances, companies cannot guarantee data privacy and, should the data be accessed, your business could be in breach of regulations.

The tip here is a simple one: for speed, responsiveness, SEO and compliance, ensure your VPS is physically hosted in the country where the vast majority of your users are located. Be careful though: just because your web host operates in your country doesn’t necessarily mean their servers are based there. This is why, at Anteelo, all our datacentres are based in the UK.

Expertise


As your business develops its use of IT, you will start to need more in-house expertise to manage your system and make use of the applications at your disposal. Upgrading to VPS is a critical time for having IT skills in place, as you may need to learn how to use the new platform, migrate your website and other apps to it and deploy any new apps that you want to take advantage of.

IT expertise, however, is in short supply and training can be expensive. Even with it in place, there may be issues that you need help with. This makes it crucial that when choosing a VPS solution, you opt for a vendor that provides 24/7 expert technical support. A good host will not only set up the VPS for you and migrate your website; they will also manage your server so you can focus on your business and be there to deliver professional support whenever it is needed.

Security


The proliferation of sophisticated cybercrime, together with increased compliance regulation, means every business needs to place security high on its list of priorities. While moving from a shared server to a VPS with its own operating system makes your system inherently safer, you should not overlook the security provided by your hosting provider.

Look for a host that provides robust firewalls with rules customised for VPS, intrusion and anti-DDoS protection, VPNs, anti-malware and application security, SSL certificates, remote backup solutions, email filters and email signing certificates.

Conclusion

VPS hosting offers a growing business the ideal opportunity to grow its website, handle more traffic and deploy a wider range of business applications, all at an affordable price. However, it’s important to choose a VPS package that offers enough resources, is located close to your customers, is managed on your behalf, comes with 24/7 expert technical support and provides the security your company needs.

The realities of retailers in a COVID-19 world


The Coronavirus outbreak is having an unprecedented effect globally. While brave health workers confront the virus head on, business leaders from several service sectors are playing an important role in the background. In addition to medicine and healthcare, basic food items are essential, and retailers are working round the clock to ensure they can deliver essential items in a timely manner. While some retailers are better prepared than others, most retail management teams are facing a few looming challenges:

Challenges faced by the retailers


  • Logistics disruption caused by city closures and quarantine measures, with employees unable to come to work due to illness and/or regulatory advisories
  • Supply crunch due to the closing of factories. It started with the production break in China; meanwhile, demand for certain categories surged due to panic purchasing
  • Lack of e-commerce readiness. Many retailers are not e-commerce ready and cannot deal with a scale where almost all purchases are expected to happen online.

Lessons from the past

Although the Coronavirus outbreak is unprecedented and its impact is on a different scale, we may still learn a few lessons from the recovery patterns of two previous epidemics and two natural disasters: the SARS outbreak in China in 2003, Hurricane Katrina in 2005, the Fukushima disaster in Japan in 2011 and the MERS outbreak in South Korea in 2015.

The magnitude of the impact of these events varied; however, there were similarities in the demand fluctuation patterns during and after each crisis for the following three distinct classes of products:

  • Staples, fresh food and other essentials – demand was very high during the crisis and slowly stabilized afterwards
  • Healthcare essentials, disinfectants, hand sanitizers and cleaning products – demand spiked during the crisis and dropped sharply afterwards
  • Clothing, cosmetics and toys – this discretionary spend dipped significantly during the crisis and spiked immediately afterwards due to pent-up demand, with gradual stabilization.

Retailers can take a lead in helping the community face the crisis and, in the process, build a stronger, longer-lasting brand. For example, certain retailers deepened consumer trust by rapidly resuming operations, staying at the forefront during the crisis and demonstrating their commitment to the community. A case in point is Costco, which has kept its stores open, supplying food and essential resources to the community. Similarly, Walmart has opened its parking lots for Coronavirus test centers and is working with restaurants and hospitality groups to hire people facing layoffs and furloughs. GameStop adjusted its retail operations so it can continue to provide essential products that allow customers to stay connected and work remotely more effectively.

Another long-term change observed was in consumers’ food habits and lifestyles. A large segment of consumers came to prefer fresh, healthy food at a convenient price point. Thus, food safety, compliance and brand building became important for retailers seeking to earn consumers’ loyalty.

How can retailers be better prepared

Reports indicate that online sales in China increased by between 200% and 650% year-over-year from late January to mid-February in the grocery, fresh vegetable and disinfectant categories.

Retailers need to energize their e-commerce arm and be ready to cater to higher demand.

Overcome supply-chain obstacles


A significant portion of the supply of consumer products comes from China. Some factories in China and other South East Asian markets are only partially operational or not operational at all. In addition, governments have imposed restrictions on movement and some cities have been quarantined, causing huge transportation disruption. Some of this supply chain impact is inescapable, but it may help if retailers assess the challenge by product category and take the necessary steps.

For example,

  • Monitor inventory at the zip code, store and fulfillment-center level constantly (see the sketch below)
  • Redirect inventory to the worst-hit locations
  • Impose restrictions on allowable transaction limits
  • Explore additional suppliers and review existing supplier capacity, especially local suppliers. These steps may also pay off in the long term, post-crisis
  • For consumer products such as electronics, inform consumers about delays. This helps earn their trust, rather than missing a promised date; AliExpress did this prudently to manage consumer expectations
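
As a rough illustration of the monitoring and redirection steps above, the sketch below flags stores whose on-hand stock covers less than a target number of days of recent demand, so inventory can be redirected there first. The data structures and thresholds are assumptions for the example, not part of any specific retail system.

```python
from dataclasses import dataclass

@dataclass
class StoreStock:
    store_id: str
    zip_code: str
    on_hand_units: int
    avg_daily_demand: float  # recent average, e.g. trailing 7 days

# Assumed sample data for illustration only.
stores = [
    StoreStock("S001", "10001", on_hand_units=120, avg_daily_demand=80),
    StoreStock("S002", "60601", on_hand_units=900, avg_daily_demand=150),
    StoreStock("S003", "94105", on_hand_units=40,  avg_daily_demand=95),
]

DAYS_OF_COVER_TARGET = 3  # flag anything below three days of cover

def days_of_cover(s: StoreStock) -> float:
    return s.on_hand_units / max(s.avg_daily_demand, 1e-9)

# Flag worst-hit locations and rank them so surplus stock can be redirected.
shortfalls = sorted(
    (s for s in stores if days_of_cover(s) < DAYS_OF_COVER_TARGET),
    key=days_of_cover,
)
for s in shortfalls:
    print(f"{s.store_id} ({s.zip_code}): {days_of_cover(s):.1f} days of cover "
          f"-> prioritise replenishment")
```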

Rebuild AI


Trends and patterns have shifted for key business metrics such as sales, demand, inventory, cost, supply and logistics. This means the decision engines and AI systems have never observed this kind of trend shift in the data they were trained on, so their effectiveness will soon degrade. The AI algorithms should be revisited to account for the new trends.
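
A lightweight way to detect such a trend shift is to compare a key metric in recent data against the window the model was trained on, and trigger a review or retrain when they diverge. The sketch below uses a simple z-score on daily demand; it is illustrative only and not tied to any particular forecasting system.

```python
import statistics

def trend_shift(train_window: list, recent_window: list,
                z_threshold: float = 3.0) -> bool:
    """Flag a shift when recent average demand sits far outside the
    distribution the model was trained on."""
    mu = statistics.mean(train_window)
    sigma = statistics.stdev(train_window) or 1e-9
    recent_mean = statistics.mean(recent_window)
    z = abs(recent_mean - mu) / sigma
    return z > z_threshold

# Assumed daily demand figures for illustration.
training_demand = [100, 95, 105, 110, 98, 102, 97, 103]
recent_demand = [310, 280, 295, 320]  # panic-buying spike

if trend_shift(training_demand, recent_demand):
    print("Demand distribution has shifted: retrain or re-tune the model.")
```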

Overcome technology challenges


Most retailers believe they are equipped with adequate technology to deal with a sudden surge in volume. They should validate this by analyzing click-stream data, audit logs and call-center data to identify the technology challenges visitors face, and put systematic fixes in place to drive a better experience throughout the purchase journey.

Follow ethical business practices


E-commerce marketplaces allow third-party sellers to sell products, and some of these sellers may inflate prices to profit from the crisis. It has been reported that 2-ounce hand sanitizers from a well-known national brand were being sold for $400 and that mask prices were up tenfold. In the process, consumers build negative sentiment towards the brand and the retailer. Retailers and brands should monitor the prices of their products frequently and take the necessary action against violators.

Additionally, there are counterfeit products in the market presented as national brands; their descriptions, text and images look very similar to the originals. It has been reported that searches for keywords related to Coronavirus and COVID-19 have increased by several thousand percent on major e-commerce sites, which is triggering inappropriate listings of products claiming health benefits and misleading consumers. Retailers and brands must track inappropriate listings and de-list them as soon as possible.
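
A simple automated guardrail, sketched below with assumed data, is to compare each marketplace listing against a pre-crisis baseline price and flag anything above a chosen multiple for review, alongside keyword checks for inappropriate health claims.

```python
# Assumed listing data for illustration only.
listings = [
    {"sku": "SANITIZER-2OZ", "seller": "A", "price": 400.00, "title": "2oz hand sanitizer"},
    {"sku": "SANITIZER-2OZ", "seller": "B", "price": 3.49,   "title": "2oz hand sanitizer"},
    {"sku": "MASK-N95",      "seller": "C", "price": 89.99,  "title": "N95 mask - kills coronavirus"},
]
baseline_prices = {"SANITIZER-2OZ": 2.99, "MASK-N95": 9.99}  # assumed pre-crisis prices

PRICE_MULTIPLE_LIMIT = 3.0
BANNED_CLAIMS = ("cures covid", "kills coronavirus", "prevents covid")

for item in listings:
    baseline = baseline_prices.get(item["sku"])
    if baseline and item["price"] > PRICE_MULTIPLE_LIMIT * baseline:
        print(f"Possible price gouging: {item['sku']} from seller {item['seller']}: "
              f"${item['price']:.2f} vs baseline ${baseline:.2f}")
    if any(claim in item["title"].lower() for claim in BANNED_CLAIMS):
        print(f"Inappropriate claim in listing {item['sku']} from seller {item['seller']}")
```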

Hire additional resources


Retailers need to plan for onboarding additional resources to enable omni-channel functions. There will be operational challenges that can be dealt with by additional resources in the short term. For example, calling specific consumer segments that are less accustomed to technology will help the community and drive further online adoption. Similarly, manual processes can compensate for short-term delivery or pick-up challenges. Amazon has announced the hiring of an additional 100,000 employees, while Walmart is in the process of hiring 150,000 associates.

Loyalty and brand value can be re-established during periods of extreme social stress. Consumers want essential products when they need them, at a fair price, and they want to trust the retailers they buy from and the brands they choose. While retailers have been at the forefront of helping communities fight this pandemic, they should also focus on building a stronger brand that will ensure sustainable growth once the crisis is over.

What to Consider when Choosing a WordPress Hosting Plan


While dedicated WordPress hosting is available from many service providers, the features offered can vary considerably from host to host. Knowing what to look for can help site owners make informed decisions about which provider to opt for, a choice that can have a significant long-term impact on the success of the website. Here, we discuss the essential features you should look for in a dedicated WordPress hosting solution.

WordPress optimised servers


The key difference between a standard hosting package and dedicated WordPress hosting is the server. By only hosting WordPress websites on a server, the service provider is free to configure it to boost the performance of the WordPress platform. This will include optimising the server with features like NGINX cache, PHP7+OPcache and HTTP2. The NGINX caching engine delivers frequently requested website files to users’ browsers in milliseconds, PHP7 together with OPcache enhances performance and the latest HTTP/2 protocol increases website responsiveness.
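
One quick way to verify that a host really delivers these optimisations is to check which protocol and headers your site is served with. The sketch below uses the third-party httpx library (installed with HTTP/2 support, e.g. `pip install 'httpx[http2]'`) against a placeholder URL; header names such as the cache-status header vary by host, so treat them as assumptions.

```python
import time
import httpx

URL = "https://www.example.com/"  # replace with your own site

start = time.perf_counter()
with httpx.Client(http2=True) as client:
    response = client.get(URL)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Protocol:      {response.http_version}")   # expect "HTTP/2" on an optimised host
print(f"Response time: {elapsed_ms:.0f} ms")
print(f"Server header: {response.headers.get('server', 'n/a')}")
# Many NGINX cache setups expose a hit/miss header, but the exact name varies by host.
print(f"Cache status:  {response.headers.get('x-cache-status', 'n/a')}")
```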

Another feature to look for is MySQL database management which, when powered by MariaDB, enables the server to be optimised for the read-heavy workloads which occur in WordPress. Users should also have access to tools like PHPMyAdmin, which enable easier management of their MySQL databases.

While PHP 7 currently provides the best performance, some WordPress sites still need to use older versions of PHP to maintain compatibility with legacy versions of WordPress, plugins or themes. The server should be configured to cater for these sites and the hosting package should give you the option to choose a version via a PHP version selector.

Additionally, users should look for SSD storage, which offers far superior read-write speeds compared with traditional HDDs.

Advanced security


WordPress is used to create 35% of the world’s websites, and this popularity makes it a regular target for cybercriminals. A good web host will know this and ensure that your WordPress hosting comes with first-rate security, including a WordPress application firewall that detects and repels attacks from hackers and bots.

The most advanced service providers offer WordPress security toolkits, accessible on any device, that enable you to manage a wide range of security features from anywhere at any time. These toolkits will scan your website for security issues, let you know if you are following best practice and warn you when you need to carry out updates. For additional ease of use, look for 1-click hardening capabilities that ensure you follow best practice without having to put it into place yourself. This is ideal for those new to WordPress.

Beyond this, look for a hosting solution that provides free SSL certificates to encrypt data sent between your server and a user’s browser and which ensures the secure padlock icon is displayed next to your web address on browsers. Finally, to protect you against data loss, choose a host which carries out daily backups so your data can easily be recovered.

Ease of use features


Ease of use starts with a user-friendly control panel. Ideally, this should be a fully-featured toolkit that is accessible on all devices, enabling you to manage your websites wherever you are. With this, you should be able to install and manage multiple sites and the installation should be easy, using a one-click installer that can get the task completed in less than 30 seconds. Also, look for a plugin and theme manager that will allow you to install, patch and update themes and plugins directly from within the control panel.

Users should be able to carry out a wide range of other tasks easily from within the control panel. These include enabling automatic patching for WordPress core; creating development environments to test things like new plugins before going live; and cloning sites for development and staging and syncing them to production. To prevent major development issues, you should also look for the ability to create snapshots you can restore to should changes not go to plan, to put the site into maintenance mode with a single click, and to use a debug mode for troubleshooting code. Another helpful feature to look for is the ability to create, schedule and automate tasks using an inbuilt cron job manager.

Service and support


First and foremost, you should expect that your web host provides 24/7 technical support. This will mean that, whenever you have an issue, a WordPress expert will be available to deal with it and help you put things right. You also need to look for guaranteed uptime, storage capacity, unlimited or unmetered bandwidth (data transfer) and the number of websites and mailboxes you can create in your package. If you are moving from another host, finding a web host that offers inbuilt migration management and who will migrate your site for free is a big plus.

Conclusion

WordPress is a unique CMS and has its own particular needs when it comes to hosting. A dedicated WordPress hosting package should fulfil those needs and make it easy for the site to be managed. Hopefully, the information given here will help you find a WordPress hosting solution that is right for you.

Why Is The Cloud the Best Option for Customer Data Management?


The more a company understands its customers, the better it will be able to build relationships, enhance the customer experience and deliver accurate, personalised marketing. Today, the tool of choice for providing these insights is a customer data platform (CDP). In this post, we’ll look at the benefits of using CDPs and why, to get the most value from them, they need to be deployed in the cloud.

What is a CDP?


A CDP is a database application that organises and unifies data into a consistent record that can be used by all the company’s systems. In doing so, it provides a comprehensive, all-touchpoint overview of customers, either as individuals or as members of various groups, which is invaluable for the analytics needed to inform decision making. The results offer companies credible, real-time data on their customer’s behaviour which can be used to help personalise marketing, improve customer experience and thus strengthen relationships.

Businesses collect data from a wide range of sources, including IoT devices, website and mobile app behaviour tracking, purchase histories, emails, live chat interactions and information customers provide about their personal circumstances, such as age, gender, occupation, family and income. Often, much of this information is gathered and stored separately, with access limited to individual departments. When data is stored in these silos, no one in the company has the full picture, and this can have a negative impact on decision making.

The benefit of a CDP is that it can take data from all these sources and unify it, giving all decision-makers the complete perspective they need to develop successful strategies. It allows them to pool personal information with behavioural, attitudinal and engagement data to understand the needs of the individual and discover patterns in customer groups. It can even help uncover new groups that hadn’t previously been conceived of.
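
At its core, this unification step is a series of joins on a shared customer key. The minimal pandas sketch below, using invented sample data, shows how records from a CRM export, web analytics and purchase history can be merged into a single customer view; a production CDP adds identity resolution, deduplication and much more.

```python
import pandas as pd

# Invented sample extracts from three separate silos.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "name": ["Asha", "Ben", "Carla"],
    "age": [34, 52, 28],
})
web = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "sessions_30d": [12, 3, 25],
    "last_channel": ["email", "organic", "paid"],
})
purchases = pd.DataFrame({
    "customer_id": [1, 1, 3],
    "amount": [59.99, 120.00, 15.50],
})

# Aggregate transactional data, then join everything on the shared key.
spend = purchases.groupby("customer_id")["amount"].agg(
    total_spend="sum", orders="count").reset_index()

unified = (crm
           .merge(web, on="customer_id", how="left")
           .merge(spend, on="customer_id", how="left")
           .fillna({"total_spend": 0.0, "orders": 0}))

print(unified)
```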

The insights provided by analysing unified data enable the company to develop models that predict how customers’ attitudes and behaviours react to different stimuli, for example, how their shopping habits change at birthdays, how their investments may change if they have children, or how they respond during crises like coronavirus. Having this data enables companies to pre-empt changes in the market, helping them to best meet customers’ changing needs and do so faster than their competitors.

More than this, analysis also provides essential feedback on the decisions which have been made and the strategies which have been implemented, indicating where monies can be saved and where improvements can be made.

The importance of cloud


While using a CDP brings obvious benefits, there are challenges to deploying it effectively. With so much data being collected and analysed today, businesses need increasingly larger data storage and processing capacity. Providing this in-house can be expensive, with companies needing to purchase the necessary high-spec hardware and applications, employ IT staff to manage the system and pay for ongoing overheads like maintenance and power. As more data is collected, additional hardware will be required, all of which will need to be replaced when it becomes obsolete.

A cloud solution eradicates any requirement to purchase hardware and can lessen the cost of software licensing. All the infrastructure required is provided on a pay-as-you-go basis and is managed, maintained and updated by the vendor. This means that when additional resources are needed to undertake large-scale analytics, you only pay for them when you use them, making it the most cost-efficient way to undertake the process.

A cloud solution also makes it easier for your IT team to focus on more business-oriented projects as the vendor will provide a managed service, as well as offering 24/7 expert, technical support to help your team deploy and run your system and applications.

Once your cloud-based CDP is deployed, it will be available over the internet, meaning team members who need access to it can get it from anywhere they have an internet connection. This improves collaboration and allows teams to work remotely, anywhere in the world.

Another factor to consider is that, for many businesses, the internet is the source of most of their customer data, such as from websites, apps, emails, live chat and IoT devices. As most of these touchpoints are cloud-based, it makes sense that the data they gather remains in the cloud as it can be stored in the same data warehouse and thus be better managed and more swiftly processed.

Finally, but also of crucial importance, is that the cloud provides exceptional data security. Data can be backed up continuously, with backups being checked for integrity and being encrypted, ensuring the data is not only secure but can be restored almost instantly should there be a data loss. Access to data can be restricted using logical access while logins can be protected using single sign-on or multifactor authentication protocols. The vendor also provides a wide range of security measures, including firewalls, malware monitoring, intrusion prevention and so forth. All these measures can help ensure companies comply with data protection regulations like GDPR.

Conclusion

A customer data platform provides one of the most useful tools for companies undergoing digital transformation, enabling them to have previously unattainable insights into their customers and the marketplace. To make the best use of this, a company will need significant data storage and processing capacity. Cloud offers the most cost-effective way to provide the infrastructure needed, while also providing scalability, security and IT expertise. For more information about our cloud services, visit https://anteelo.com/.

COVID-19 : Impact on the Hospitality Industry

The COVID-19 pandemic is fast becoming one of the biggest threats to human lives and the global economy. With governments across the world taking preventive measures such as quarantine, social distancing and travel bans, hospitality is one of the first industries to be adversely hit. The impact is no longer limited to Italy and China but is increasingly visible across the globe, resulting in a steep decline in bookings and occupancy rates.

Impact on US Hotels

The United States has seen exponential growth in the number of COVID-19 cases in the past couple of weeks. The impact on hotels can be seen in the chart below.

[Chart: Average hotel occupancy rate, Statista]

The trends clearly indicate a continuous decline in room occupancy, with a steep drop over the first two weeks of March. As of the second week of March, the industry reported a YoY decline of:

  • 24.4% in Occupancy
  • 10.7% in Average Daily Rate (ADR) and
  • 32.5% in Revenue per available room (RevPAR).
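
These three figures are consistent with one another, since RevPAR is the product of occupancy and ADR. The quick check below confirms that a 24.4% drop in occupancy combined with a 10.7% drop in ADR implies roughly a 32.5% drop in RevPAR.

```python
occupancy_decline = 0.244
adr_decline = 0.107

# RevPAR = Occupancy x ADR, so the surviving fractions multiply.
revpar_remaining = (1 - occupancy_decline) * (1 - adr_decline)
revpar_decline = 1 - revpar_remaining

print(f"Implied RevPAR decline: {revpar_decline:.1%}")  # ~32.5%
```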

Based on research by CBRE (Coldwell Banker Richard Ellis), the US is roughly 2 weeks behind Italy and 8 weeks behind China in terms of the pandemic’s impact on market demand.

If China and Italy are any indication of how the pandemic and its impact spread, the major disruption occurs within the first 3 months. For the United States hotel industry, this translates to a steep fall in occupancy rates, reaching 25% to 30% in March and 10% to 15% in April.

A study by hotelAVE estimates that 15% to 20% of hotels in the United States will close temporarily by the end of March, because their fixed carrying costs are less than the negative cash flow projected from staying open. While this might be true for certain properties, other hotels are taking temporary measures to reduce operational costs, which include:

  • Shutting down floors
  • Cutting back on amenities and services
  • Cutting back on or closing F&B outlets
  • Encouraging hotel staff to take unpaid leave

Irrespective of the measures properties undertake, there is a high risk that around 10% of properties will shut down permanently, as they will not be able to sustain themselves through this period.

Past Demand Shocks – An Overview:

2001 – During the recession of 2001-02, RevPAR declined by an alarming 10.2%.

2008 – The period of 2008-09 saw RevPAR plummet by a shocking 16.8%.

2003 – The SARS outbreak saw occupancy rates decline by 26% when comparing the April-June quarter of 2003 with the same quarter of 2002.

2013-2014 – The Ebola crisis saw a 15% decline in room occupancy rates in sub-Saharan Africa between 2013 and 2014.

The recession periods resulted in a slump in consumer surplus, and leisure travel took the hardest hit. In addition, with an increased number of lay-offs, business travel was also curtailed.

By contrast, during the SARS and Ebola outbreaks there was a consumer surplus, but also an aversion to travel because of perceived hygiene and safety concerns.


How does COVID-19 outbreak compare to the past demand shocks?

  • Vs SARS/Ebola: The first difference is the sheer scale of COVID-19’s impact. Ebola and SARS epidemics were contained in certain geographical areas, but COVID-19, due to its highly contagious nature, has now permeated across the globe.
  • Vs Recession: Recession periods tend to have a time lag between the actual onset and the start of the decline in revenue for the hotel and other industries.
    For COVID-19, the impact was instantaneous. If there is a steady recovery from the COVID-19 outbreak, the impact on the hotel industry may not linger for the whole year. It must be noted, though, that given the scale of economic loss across industries (and that 2020-21 was already susceptible to a recession in the organic business cycle), there is a chance that, as the COVID-19 outbreak subsides, it leads to a prolonged recession, which would in turn mean further losses for the hotel industry.

Time of Recovery in Past Demand Shocks:

In the case of prior demand shocks such as SARS or the financial crisis, it took at least two quarters before there was any indication of recovery, in the form of positive monthly growth in occupancy rates.

It is also worth noting that metrics like demand and occupancy recovered earlier than revenue-related metrics like ADR or RevPAR, indicating aggressive marketing and discounting of room prices to bring traffic back into the hotels.

The following table summarizes the recovery times exhibited after prior demand shocks:

[Table: Time to recovery following prior demand shocks]

Road to Recovery

Given that revenue has fallen to almost zero, most hotel businesses are focusing on reducing costs and pursuing alternative ways of generating revenue to keep their properties up and running. For example, as some countries lift their lockdowns with a mandated 14-day quarantine period for incoming travellers, hotel chains are offering quarantine zones for these travellers to ensure footfall. Other hotel chains can pursue similar avenues to make sure they have some revenue trickling in through these tough times.

Once the situation returns to normal, we may see some consolidation among hotel chains, especially among boutique hotels. Businesses will be forced to take a hard look at any new investment they make and will be looking at ways to optimize costs.

To nudge customers back into their travel patterns, we will see co-opetition, where companies across the industry work together for its collective benefit.

We will also witness a lot of brand redefinition through AI and automation, e.g. innovative use of the Internet of Things (IoT) to enable mobile check-in. This can be leveraged to give guests a zero-contact experience, easing their fears about the disease and its transmission.

Post-recovery, the landscape of the hotel industry will go through a massive overhaul. Given that urban hotels have been hit harder than their rural and suburban counterparts, we may see some urban properties go into a partial or total lockdown to recoup some of the losses. Smaller hotel chains may be acquired by industry giants like Marriott and Hilton.

Overall, it is predicted that the world economy, despite taking a severe blow, will be back on a positive trajectory within a year, and will take around 3-4 years to completely nullify the losses accrued during this period.

What do you think? We would like to know your opinion.

Developing for Azure autoscaling


The public cloud (i.e. AWS, Azure, etc.) is often portrayed as a panacea for all that ails on-premises solutions. And along with this “cure-all” impression are a few misconceptions about the benefits of using the public cloud.

One common misconception pertains to autoscaling, the ability to automatically scale up or down the number of compute resources being allocated to an application based on its needs at any given time.  While Azure makes autoscaling much easier in certain configurations, parts of Azure don’t as easily support autoscaling.

For example, if you look at the different App Service plans, you will see that the lower three tiers (Free, Shared and Basic) do not include support for autoscaling, unlike the Standard tier and above. And even where autoscaling is available, you still need to design and architect your solution to make use of it. The point being: just because your application is running in Azure does not necessarily mean you automatically get autoscaling.

Scale out or scale in


In Azure, you can scale up vertically by changing the size of a VM, but the more popular way to scale in Azure is to scale out horizontally by adding more instances. Azure provides horizontal autoscaling via numerous technologies. For example, Azure Cloud Services, the legacy technology, provides autoscaling at the role level. Azure Service Fabric and virtual machines implement autoscaling via virtual machine scale sets. And, as mentioned, Azure App Service has built-in autoscaling for certain tiers.

When it is known ahead of time that a certain date or period (such as Black Friday) will warrant scaling out horizontally to meet anticipated peak demand, you can create a static, scheduled scale-out. This is not “auto” scaling in the true sense. Rather, dynamic, reactive autoscaling is typically based on runtime metrics that reflect a sudden increase in demand. Monitoring metrics and taking compensatory instance-adjustment actions when a metric reaches a certain value is the traditional way to autoscale dynamically.
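
The difference between the two approaches can be illustrated with a small, platform-agnostic sketch: a scheduled rule fixes instance counts for a known event, while a reactive rule adjusts counts from live metrics. This is illustrative logic only, not Azure’s autoscale engine or SDK.

```python
from datetime import datetime, date

# Static schedule: known peak periods and the instance count to hold.
SCHEDULE = {date(2021, 11, 26): 20}  # e.g. Black Friday

# Reactive rule: thresholds on an observed runtime metric (average CPU %, here).
SCALE_OUT_ABOVE, SCALE_IN_BELOW = 70.0, 25.0
MIN_INSTANCES, MAX_INSTANCES = 2, 30

def desired_instances(now: datetime, current: int, avg_cpu: float) -> int:
    # Scheduled scaling wins when today is a known peak day.
    if now.date() in SCHEDULE:
        return SCHEDULE[now.date()]
    # Otherwise react to the metric, one instance at a time.
    if avg_cpu > SCALE_OUT_ABOVE:
        return min(current + 1, MAX_INSTANCES)
    if avg_cpu < SCALE_IN_BELOW:
        return max(current - 1, MIN_INSTANCES)
    return current

print(desired_instances(datetime(2021, 11, 26, 9), current=5, avg_cpu=40))  # 20 (scheduled)
print(desired_instances(datetime(2021, 3, 2, 14), current=5, avg_cpu=85))   # 6 (scale out)
```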

Tools for autoscaling


Azure Monitor provides that metric monitoring with autoscale capabilities. Azure Cloud Services, VMs, Service Fabric and VM scale sets can all leverage Azure Monitor to trigger and manage autoscaling via rules. Typically, these scaling rules are based on memory-, disk- and CPU-related metrics.

For applications that require custom autoscaling, this can be done using metrics from Application Insights. When you create an Azure application that you want to scale in this way, make sure to enable Application Insights. You can create a custom metric in code and then set up an autoscale rule using that custom metric, with Application Insights as the metric source in the portal.

Design considerations for autoscaling


When writing an application that you know will be auto-scaled at some point, there are a few base implementation concepts you might want to consider:

  • Use durable storage to store your shared data across instances. That way any instance can access the storage location and you don’t have instance affinity to a storage entity.
  • Seek to use only stateless services. That way you don’t have to make any assumptions on which service instance will access data or handle a message.
  • Realize that different parts of the system have different scaling requirements (which is one of the main motivators behind microservices). You should separate them into smaller discrete and independent units so they can be scaled independently.
  • Avoid any operations or tasks that are long-running. This can be facilitated by decomposing a long-running task into a group of smaller units that can be scaled as needed. You can use what’s called a Pipes and Filters pattern to convert a complex process into units that can be scaled independently (see the sketch below).
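
A minimal illustration of that last point: the sketch below breaks a long-running job into small filter stages connected by queues, so each stage could in principle be scaled (run in more workers or instances) independently. It is a single-process toy, not a distributed implementation.

```python
import queue
import threading

def stage(worker, inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Drain inbox, apply a filter function to each item, pass results on."""
    while True:
        item = inbox.get()
        if item is None:            # sentinel: propagate shutdown downstream
            outbox.put(None)
            return
        outbox.put(worker(item))

# Three small, independent filters that together replace one long-running task.
def parse(raw: str) -> str: return raw.strip().lower()
def transform(text: str) -> str: return text.replace("colour", "color")
def load(text: str) -> str: return f"stored<{text}>"

q1, q2, q3, q4 = (queue.Queue() for _ in range(4))
threads = [
    threading.Thread(target=stage, args=(parse, q1, q2)),
    threading.Thread(target=stage, args=(transform, q2, q3)),
    threading.Thread(target=stage, args=(load, q3, q4)),
]
for t in threads:
    t.start()

for raw in ["  Colour Swatch A ", "COLOUR Swatch B", None]:
    q1.put(raw)

while (result := q4.get()) is not None:
    print(result)
for t in threads:
    t.join()
```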

Scaling/throttling considerations

Autoscaling can be used to keep the provisioned resources matched to user needs at any given time. But while autoscaling can trigger the provisioning of additional resources as needs dictate, this provisioning isn’t immediate. If demand unexpectedly increases quickly, there can be a window where there’s a resource deficit because they cannot be provisioned fast enough.

An alternative strategy to auto-scaling is to allow applications to use resources only up to a limit and then “throttle” them when this limit is reached. Throttling may need to occur when scaling up or down since that’s the period when resources are being allocated (scale up) and released (scale down).

The system should monitor how it’s using resources so that, when usage exceeds the threshold, it can throttle requests from one or more users. This will enable the system to continue functioning and meet any service level agreements (SLAs). You need to consider throttling and scaling together when figuring out your auto-scaling architecture.
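
As a concrete illustration of throttling, the sketch below implements a simple token-bucket limiter: each user gets a refillable allowance of requests, and anything beyond it is rejected (or deferred) until capacity frees up. It is a generic example, not tied to any particular cloud service.

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per second per user, with a small burst."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.capacity = rate, burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst cap.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # throttle: caller should back off or queue the request

buckets = {}

def handle_request(user_id: str) -> str:
    bucket = buckets.setdefault(user_id, TokenBucket(rate=5, burst=10))
    return "processed" if bucket.allow() else "429 Too Many Requests"

for i in range(15):
    print(i, handle_request("user-42"))
```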

Singleton instances


Of course, auto-scaling won’t do you much good if the problem you are trying to address stems from the fact that your application is based on a single cloud instance. Since there is only one shared instance, a traditional singleton object goes against the positives of the multi-instance high scalability approach of the cloud. Every client uses that same single shared instance and a bottleneck will typically occur. Scalability is thus not good in this case so try to avoid a traditional singleton instance if possible.

But if you do need to have a singleton object, instead create a stateful object using Service Fabric with its state shared across all the different instances.  A singleton object is defined by its single state. So, we can have many instances of the object sharing state between them. Service Fabric maintains the state automatically, so we don’t have to worry about it.

The Service Fabric object type to create is either a stateless web service or a worker service. This works like a worker role in an Azure Cloud Service.

Five major cloud developments to watch in 2021


We rang in 2020 with all the expectations that cloud computing would continue its progression as a massive catalyst for digital transformation throughout the enterprise. What we didn’t expect was a worldwide health crisis that led to a huge jump in cloud usage.

Cloud megadeals have heralded a new era where cloud development is a key driver in how organizations deploy operating models and platforms. In just the past 6 months, we saw 6 years’ worth of changes in the way businesses are run.

There has been a drastic shift to remote work – with the percentage of workers using desktop services in the cloud skyrocketing. Gallup estimates that as of September, nearly 40 percent of full-time employees were working entirely from home, compared to 4 percent before the crisis.

We are seeing renewed interest in workload migration to public cloud or hybrid environments. Gartner forecasts that annual spending on cloud system infrastructure services will grow from $44 billion in 2019 to $81 billion by 2022.

In light of these underlying changes in the business landscape, here are some key cloud trends that will reshape IT in 2021:

Remote working continues to drive cloud and security services


The year 2020 saw a huge expansion of services available in the public cloud to support remote workers. Big improvements in available CPUs mean remote workers can access high-end graphics capabilities to perform processing-intensive tasks such as visual renderings or complex engineering work. And as worker access to corporate networks increases, the cloud serves as an optimal platform for a Zero Trust approach to security, which requires verification of all identities trying to connect to systems before access can be granted.

Latest system-on-a-chip technology will support cloud adoption


The introduction of the Apple M1 in November marked the beginning of an era of computing where essentially the whole computer system resides on a single chip, which delivers incredible cost savings. Apple’s Arm-based system on a chip represents a radical transformation away from traditional Intel x86 architectures. In 2020, Amazon Web Services also launched Graviton2, based on 64-bit Arm architecture, a processor that Amazon says provides up to a 40 percent improvement in price performance for a variety of cloud workloads compared to current x86-based instances.

These advancements will help enterprises migrate to the cloud more easily with a compelling price-to-performance story to justify the move.

Move to serverless computing will expand cloud skills


More than ever, companies are retooling their operating models to adopt serverless computing and letting cloud providers run the servers, prompting enormous changes in the way they operate, provide security, develop, test and deploy.

As serverless computing grows, IT personnel can quickly learn and apply serverless development techniques, which will help expand cloud skills across the industry. Today’s developers often prefer a serverless environment so they can spend less time provisioning hardware and more time coding. While many enterprises are challenged with the financial management aspects of predicting consumption in a serverless environment, cost optimization services can play a key role in overcoming these risks.

Machine learning and AI workloads will expand in the public cloud


As a broad range of useful tools becomes available, the ability to operate machine learning and AI in a public cloud environment – and integrate them into application development at a low cost – is moving forward very quickly.

For example, highly specialized processors, such as Google’s TPU and AWS Trainium, can manage the unique characteristics of AI and machine learning workloads in the cloud. These chips can dramatically decrease the cost of computing while delivering better performance. Adoption will grow as organizations figure out how to leverage them effectively.

More data will move to the cloud


Data gravity is the idea that large masses of data exert a form of gravitational pull within IT systems and attract applications, services and even other data. Public cloud providers invite free data import, but data export carries a charge.

This is prompting enterprises to build architectures that optimize for not paying that egress charge, which means pushing workloads and their data to reside in a single cloud, rather than multicloud environments. Data usage in the cloud can eventually amass enough gravity to increase the cost and consumption of cloud services.

However, as cloud technology continues to mature, organizations should not be afraid to duplicate data in the cloud – that is, it is perfectly fine to have data in different formats in the cloud. A key goal is to have your data optimized for the way you access it, and the cloud allows you to do that.

The journey continues


So as the world around us changed in unprecedented ways in 2020, cloud computing continued its march to enterprise ubiquity and is a key layer of the Enterprise Technology Stack. Yet, by most measures, cloud is still only something like 5 to 10 percent of IT, so there is still a lot of room to grow. Getting the most out of cloud technology going forward is not a 5-year journey; it may take many years to truly tap into its full potential.

The shift in operational models and the organizational change needed to fully embrace cloud computing is significant. It is not a minor task to undergo that transformation journey. In early 2022, we will most certainly look back and be amazed again at how far cloud has progressed in one year.

How AI-powered ‘voicebots’ can benefit airline employees — and their employers


Airline workers have it tough.

A new generation of voice-driven software bots promises to make their work easier.

Airline employees, whether pilots, flight attendants or maintenance/repair/overhaul (MRO) technicians, are often called on to perform challenging tasks — and in a hurry. Think of a pilot dealing with mechanical failure, a flight attendant who can’t make a connection due to bad weather or a technician urgently needing a crucial part that’s out of stock.

To help solve these tough challenges in real time, a new generation of “voicebots” leverages two advanced approaches:

  • The first, natural language processing (NLP), lets machines and humans interact using “natural” (that is, human) languages.
  • The second, machine learning (ML), is a subset of AI that empowers computer systems to build mathematical models based on observed patterns.


Voicebots eliminate the need to type, click or point. Instead, a worker can simply speak normally, then listen as the voicebot speaks back in response. What’s more, the latest voicebots can actually detect a speaker’s mood – for example, a sense of urgency – and then use that information to prioritize requests, such as ordering a new part.
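
As a toy illustration of how detected urgency might feed into prioritisation, the sketch below scores transcribed requests with a simple keyword heuristic and orders a work queue accordingly. Real voicebots infer urgency from acoustic and NLP models; the keywords and weights here are purely assumptions.

```python
import heapq

URGENCY_KEYWORDS = {          # assumed heuristic weights
    "immediately": 3, "urgent": 3, "asap": 3,
    "today": 2, "grounded": 2,
    "soon": 1,
}

def urgency_score(transcript: str) -> int:
    words = transcript.lower().split()
    return sum(URGENCY_KEYWORDS.get(w.strip(",.!"), 0) for w in words)

requests = [
    "Order a replacement hydraulic pump, we are grounded and need it immediately",
    "Order two cabin filters for next week's scheduled check",
    "Need a tyre delivered soon for tail number N123",
]

# Higher urgency first (heapq is a min-heap, so negate the score).
work_queue = [(-urgency_score(r), i, r) for i, r in enumerate(requests)]
heapq.heapify(work_queue)

while work_queue:
    score, _, request = heapq.heappop(work_queue)
    print(f"priority {-score}: {request}")
```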

Voicebots can also deliver important business benefits to the enterprise. For one, they empower airlines to automate tasks formerly done by hand, then expedite them based on priorities detected in a speaker’s voice. This can help airlines ease disruptions and delays, as well as lower costs and reallocate those savings to new and innovative projects. Imagine, for example, an airline that uses voicebots to ensure more efficient maintenance. If it could lower the number of flight delays by just 0.5%, the airline would enjoy total annual cost savings of $4 million to $18 million, depending on the number of daily flights.

By implementing this cutting-edge technology, airlines should also have an easier time attracting and retaining tech-savvy workers, possibly helping to mitigate the labor shortages forecast for the industry.

Voice technology soars — in the air and on the ground


Voicebots for airline workers are part of the bigger trend of voice technology that consumers are already on board with. For example, market-watcher IDC predicts consumers worldwide will purchase more than 144 million voice-enabled smart speakers this year. The business market is ripening, too. Amazon, Google and Microsoft are all dedicating serious resources to expanding their voice technologies for B2B use.

Some airlines already use AI-powered chatbots to serve their customers. These chatbots can be programmed to understand the intent behind a customer’s request, recall an entire conversation history and respond to requests in a human-like way.

On the enterprise side, aircraft-maker Boeing is among manufacturers investing in AI and other voicebot technologies. The company is conducting research on NLP, speech processing, acoustic modeling, language modeling and speech recognition.

Real-life scenarios

How will airline employees benefit from using voicebots? Here are a few possible applications:

  • Pilots can use voicebots, both during preflight preparations and while actually flying. A complicated command from air-traffic control can take pilots up to 30 seconds to complete, turning all the knobs and hitting all the necessary buttons. A speech-recognition system can cut that time dramatically, allowing pilots to keep their eyes on the traffic and weather, and to keep the airplane safe.


  • MRO technicians can use voicebots to assist maintenance and repairs. A technician needing to replace a specific component could ask a voicebot, “Do we have this part in stock?” If the answer is negative, the bot could then find the nearest location where the part is available and arrange for it to be shipped. The voicebot could even select Express or Standard delivery based on the urgency detected in the mechanic’s voice.


  • Flight attendants can use voicebots when encountering flight delays, cancellations and other common scheduling changes. For example, a flight attendant who is snowbound in Denver could tell a voicebot, “Notify Dallas that I’m going to miss my connecting flight today. Then find someone who can fill in for me on the next flight.” The airline’s crew-scheduling system could then make the necessary changes in real time.

Will "Flight Attendants" be replaced by AI & Passenger Service Robot?

Getting started

Airlines looking to equip their employees with voicebots may wonder how to begin. We suggest a three-step process:

Step 1: Ideation. Begin by brainstorming. Assemble your team and ask them: What are our biggest disruptions? How could voice technology help?


Step 2: Proof of concept. With your biggest disruptions in mind, develop a potential solution using voicebots.


Step 3: MVP. Borrow a tactic from the Agile approach — create a minimum viable product. This does not need to be a perfect, complete piece of software. Instead, create just enough for early tests and feedback. Then repeat as needed.


Airlines looking to employ voicebots will also need to take on one more challenge: data access. Voicebots need quick access to all enterprise data. Yet many airlines keep their data protected in silos, mainly for security reasons. For voicebots, that makes gaining access to this data difficult and slow.

To resolve this issue, airlines need to find an acceptable balance between data security on the one hand and speedy voicebot data access on the other. This could be hard, but the alternative of doing nothing is even worse. Any airline that doesn’t adopt voicebots can be sure the competition will.

Cloud Adoption: Major Challenges


Since the start of the pandemic, digital transformation has accelerated as more businesses see the need to adopt advanced technologies, and to do so quickly. Digital transformation has many benefits, providing ways to propel businesses forward, adapt to new ways of working and cut costs. Cloud adoption, while a necessary element of that transformation, is not without its challenges. Before migration takes place, companies need to know what the main challenges are. Here, we explain.

Security in the cloud


Cloud services, in themselves, are exceptionally secure. All cloud providers have to comply with stringent regulations and this requires them to put robust security measures in place, including the use of strict protocols and advanced security tools. However, companies still have concerns about multi-tenancy and data location.

Multi-tenancy can be a compliance issue for some organisations which hold sensitive data. The problem can be overcome by storing the data in a single-tenancy private cloud where they have dedicated use of the underlying hardware.

Data location is an issue for organisations which store data protected by regulations such as GDPR. Using a cloud provider that migrates data or backups between countries puts the data at risk of being kept in a nation that doesn’t comply with those regulations. For example, EU citizen data is protected by GDPR; however, if it is stored on servers in the US, the government there has legal access to it for national security purposes. If it is accessed, the organisation will be in breach of compliance. The easy solution here is to opt for a cloud provider which locates all its datacentres in a single country, as Anteelo does in the UK.

Cost management


One of the biggest advantages of the cloud is the ability to reduce capital expenditure on hardware and in-house datacentres. The other financial advantage is that cloud resources are chargeable on a pay per use basis, enabling companies to scale up and down quickly so that costs can be minimised.

The financial risks here depend on how well a company manages its use of the cloud. Poorly managed, it is easy for the use of these on-demand resources to spiral and this can be costly. Companies need to implement use policies, monitor cloud usage and carefully analyse where the money is being spent.

Lack of IT expertise


Migration to the cloud not only presents a new type of infrastructure to an organisation; it also puts a host of new technologies at their disposal. While the benefits of using these are the prime reason for cloud adoption, one of the challenges faced by most companies is developing the expertise to make use of them.

Organisations adopting the cloud need a clear understanding of what they want to use it for and make sure they have the necessary expertise to help them meet their objectives. This could require the training of current staff or the recruitment of new ones.

Thankfully, many providers offer managed services and 24/7 technical support. There is also a wide range of tools which automate many of the tasks which not so long ago required expert manual input.

Multi-clouds and hybrid clouds


Over 80% of companies now use more than one cloud provider, some as many as five, to carry out different workloads. The reasons for this are numerous, but it boils down to choosing the most appropriate vendor for the specific workload being undertaken. At the same time, there is an increasing number of businesses developing hybrid-clouds, a mixture of public and private clouds together with dedicated servers.

While multi-cloud and hybrid cloud can be beneficial for financial, operational and compliance purposes, they add to the complexity of an organisation’s overall infrastructure. Here, there will be a greater need for governance, monitoring, expertise and security.

Migration


While the points above discuss the challenges of cloud adoption, the migration itself can also cause problems. A cloud environment can be markedly different from the one on which an application is hosted in-house. Issues with operating system compatibility and system configuration may mean an application might not work, or work as expected, in a cloud environment. Resolving these issues can have an impact on the speed of migration, project deadlines and budgets.

Thankfully, there are a wide and growing range of applications, many of them open-source, that have been developed for cloud environments, are quickly deployable and work straight out of the box.

The key to a smooth and speedy migration, however, is to find a vendor with the expertise and technical support to help you manage the migration process.

Conclusion

The pandemic has accelerated the pace of digital transformation across the globe with unprecedented numbers of companies migrating to and expanding workloads in the cloud. While for many organisations, this is a necessary part of the ‘new normal’, they should not underestimate the challenges that cloud adoption presents. The best way to prevent issues is to work closely with a cloud provider that will get to know your company and put tailored solutions in place for you.

ML Works with CI, CD, and CM

Most of us are familiar with Continuous Integration (CI) and Continuous Deployment (CD), which are core parts of MLOps/DevOps processes. However, Continuous Monitoring (CM) may be the most overlooked part of the MLOps process, especially when you are dealing with machine learning models.

CI, CD and CM together form an integral part of an end-to-end ML model management framework, which not only helps customers streamline their data science projects but also helps them get full value out of their analytics investments. This blog focuses on the Continuous Monitoring aspect of MLOps and gives an overview of how Anteelo uses ML Works, a model monitoring accelerator built on Databricks’ platform, to help customers build a robust model management framework.

Here are a few examples of MLOps customer personas:

1. Business Org – A business team that sponsors an analytics project expects machine learning models to be running in the background, helping it draw valuable insights from its data. However, these ML models are mostly a black box, and in many cases the business sponsors are not even sure whether the analytics project will deliver a good ROI.

2. IT/Data Org – A company’s internal IT team, which supports the business teams, usually includes data engineers and data scientists who build ML pipelines. Their core mandate is to build the best ML models and migrate them to production. Once a model is live, they are either too busy building the next one to support it properly, or production support is simply not the best use of their time. The result is the lack of a streamlined model monitoring process in production, leaving IT and data leaders wondering how to support their business partners.

3. Support Org – A company’s IT support organization takes care of all IT issues. Such a team typically treats every issue the same way, with similar SLAs, and may not differentiate between supporting an ML model and supporting a Java web application. A generic support team may therefore lack the right skills to support ML models and may fail to meet the expectations of its internal customers.

A well-designed MLOps framework will address the challenges of all three personas.

Anteelo not only has extensive experience of end-to-end MLOps implementations across tech stacks but has also built MLOps accelerators to help customers realize the full potential of their analytics investments.

Let’s drill down into our model monitoring accelerator in the Continuous Monitoring (CM) space and look at the offering in more detail.

Model monitoring is not easy!

Unlike a BI dashboard or an ETL pipeline, ML models are hard to monitor because their results are probabilistic and come with their own dependencies: training data, hyperparameters, model drift and the need to explain the model’s output. Complications multiply, and monitoring becomes almost impossible, when models are built in unstructured notebooks shared across multiple data science teams. This severely impacts support SLAs, and business users gradually lose confidence in the model’s predictions.
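One common way to get those dependencies out of ad-hoc notebooks is to log them explicitly with an experiment tracker such as MLflow, which Databricks manages and which is discussed below. The sketch that follows is illustrative only and is not part of ML Works; the dataset tag, parameters and model are invented for the example.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real training data.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

params = {"n_estimators": 200, "max_depth": 5}

with mlflow.start_run(run_name="baseline_rf"):
    # Record the dependencies listed above: hyperparameters, a reference to
    # the training data, the trained model and its accuracy.
    mlflow.log_params(params)
    mlflow.set_tag("training_data", "features_v12")  # hypothetical dataset tag
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")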

ML Works to the rescue

ML Works is our model monitoring accelerator built on Databricks’ unified data analytics platform to augment our MLOps offerings. After evaluating multiple architectural options, we decided to build ML Works on Databricks to leverage offerings like Managed MLflow and Delta Lake. ML Works has been trained on thousands of models and can handle enterprise-scale model monitoring, or it can be used for automated monitoring within a small team of data scientists and analysts. Here is an overview of ML Works’ core offerings:

1. Workflow Graph – Monitoring an ML pipeline along with its related data engineering tasks can be daunting for a support engineer. ML Works uses Databricks’ Managed MLflow framework to build a visual, end-to-end workflow monitor for easy and efficient model monitoring. This helps support engineers troubleshoot production issues and narrow down the root cause faster, significantly reducing support SLAs.

2. Persona-based Monitoring – An ML model monitoring process should not only make a support engineer’s life easier but should also give other relevant personas, such as business users, data scientists, ML engineers and data engineers, visibility into their respective ML model metrics. We have therefore built a persona-based monitoring journey on Databricks’ Managed MLflow to make the model monitoring process straightforward for every persona.

3. Lineage Tracker – Picking up the task of debugging someone else’s ML code is not a pleasant experience, especially when the documentation is poor. Our Lineage Tracker, built on Databricks’ Managed MLflow, lets customers start from a dashboard metric and drill all the way down to the underlying ML model, including its hyperparameter values and training data, giving full visibility into every model’s operations. Having all the relevant details about a model in one place improves traceability, and the feature is further enhanced when we use Delta Lake’s Time Travel functionality to create snapshots of training data.

4. Drift Analyzer – Monitoring a model’s accuracy over time is critical for business users to trust its insights. Unfortunately, accuracy drifts over time for various reasons: production data changes, business requirements change and make the original features less relevant, or a newly acquired business introduces new data sources and new patterns in the data. Our Drift Analyzer automatically analyzes data drift and concept drift by reviewing the data distributions, triggers alerts when the drift exceeds a threshold and ensures that production models are continuously monitored for accuracy and relevance. A minimal sketch of this kind of distribution check appears below.
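For illustration only, the sketch below shows one common way to check for data drift: comparing feature distributions between a training snapshot and a recent scoring batch with a two-sample Kolmogorov–Smirnov test. The file paths and threshold are placeholders, and this is not the Drift Analyzer’s actual implementation.

import pandas as pd
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.05  # assumed significance level for flagging drift

def detect_drift(train_df: pd.DataFrame, live_df: pd.DataFrame) -> dict:
    """Flag numeric features whose live distribution differs from training."""
    drifted = {}
    for col in train_df.select_dtypes("number").columns:
        stat, p_value = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        if p_value < P_VALUE_THRESHOLD:
            drifted[col] = {"ks_statistic": round(stat, 4), "p_value": round(p_value, 4)}
    return drifted

# Hypothetical snapshots of the training data and the latest scoring batch.
train = pd.read_parquet("training_snapshot.parquet")
live = pd.read_parquet("latest_scoring_batch.parquet")
report = detect_drift(train, live)
if report:
    print("Drift alert:", report)  # in practice this would raise an alert or ticket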

Using ML Works, business teams can monitor and track their relevant metrics on the Persona Dashboard and use the Drift Analyzer to understand the impact of model degradation on those metrics, helping them treat the underlying ML models as a white box rather than a black box. Lineage Tracking gives data engineers and data scientists end-to-end visibility into ML models and their associated data pipelines, which streamlines development cycles by keeping track of the dependencies.
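As a small example of the Time Travel capability mentioned above, the PySpark snippet below reads a Delta table as it existed at an earlier version or date, which is how a training snapshot can be reproduced for lineage purposes. The table path, version and timestamp are placeholders, and the code assumes a Databricks or Delta-enabled Spark session.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the feature table exactly as it looked when the model was trained,
# either by version number or by timestamp (both values are hypothetical).
training_snapshot = (
    spark.read.format("delta")
    .option("versionAsOf", 3)
    .load("/mnt/lake/features/customer_churn")
)

training_as_of_date = (
    spark.read.format("delta")
    .option("timestampAsOf", "2021-06-01")
    .load("/mnt/lake/features/customer_churn")
)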

Support teams can use the Workflow Graph and relevant metrics to troubleshoot production issues faster, significantly reducing support SLAs. And finally, customers can now get full value from their analytics investments using ML Works, while also ensuring that ML deployments in production really work.
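To give a flavour of how an end-to-end workflow view can be assembled, the sketch below groups pipeline stages under a single MLflow parent run using nested runs. The stage names and metrics are invented, and this is only one possible approach, not how the Workflow Graph is built internally.

import mlflow

with mlflow.start_run(run_name="churn_pipeline"):
    with mlflow.start_run(run_name="ingest", nested=True):
        mlflow.log_metric("rows_ingested", 120000)
    with mlflow.start_run(run_name="train", nested=True):
        mlflow.log_param("max_depth", 6)
        mlflow.log_metric("val_auc", 0.87)
    with mlflow.start_run(run_name="score", nested=True):
        mlflow.log_metric("rows_scored", 118500)
# Each stage now appears as a child run under the pipeline run in the MLflow UI,
# so a support engineer can see at a glance which stage failed or regressed.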
