Knowing how to use Azure Databricks and resource groupings

Azure Databricks, an analytics platform built on open-source Apache Spark and optimized for the Microsoft Azure cloud, is a highly effective tool. However, it automatically creates resource groups and workspaces and protects them with a system-level lock, behavior that can be confusing and frustrating unless you understand how and why it happens.

The Databricks platform provides an interactive workspace that streamlines collaboration between data scientists, data engineers and business analysts. The Spark analytics engine supports machine learning and large-scale distributed data processing, combining many aspects of big data analysis all in one process.

Spark works on large volumes of data in either batch mode (data at rest) or streaming mode (live data). This live processing capability is what sets Databricks/Spark apart from Hadoop, which uses MapReduce algorithms to process batch data only.
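To make the distinction concrete, here is a minimal PySpark sketch of the two modes; the paths and the event_type column are hypothetical placeholders rather than anything from a real workspace:

```python
# Minimal PySpark sketch contrasting batch and streaming reads.
# Paths and the "event_type" column are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-vs-streaming").getOrCreate()

# Batch mode: process a bounded data set at rest.
batch_df = spark.read.json("/mnt/events/2020/")
batch_df.groupBy("event_type").count().show()

# Streaming mode: continuously process live data as it arrives.
stream_df = (
    spark.readStream
    .schema(batch_df.schema)  # streaming sources require an explicit schema
    .json("/mnt/events/incoming/")
)
query = (
    stream_df.groupBy("event_type").count()
    .writeStream.outputMode("complete")
    .format("console")
    .start()
)
```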

Resource groups are key to managing the resources bound to Databricks. Typically, you specify the resource group in which your resources are created. This changes slightly when you create an Azure Databricks service instance and specify a new or existing resource group. If, for example, you specify a new resource group, Azure will create the group and place a workspace within it. That workspace is an instance of the Azure Databricks service.

Along with the directly specified resource group, it will also create a second resource group. This is called a “Managed resource group” and it starts with the word “databricks.” This Azure-managed group of resources allows Azure to provide Databricks as a managed service. Initially this managed resource group will contain only a few workspace resources (a virtual network, a security group and a storage account). Later, when you create a cluster, the associated resources for that cluster will be linked to this managed resource group.

The “databricks-xxx” resource group is locked when it is created since the resources in this group provide the Databricks service to the user. You are not able to directly delete the locked group nor directly delete the system-owned lock for that group. The only option is to delete the service, which in turn deletes the infrastructure lock.
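As an illustration, the lock can be inspected, though not removed, with the Azure SDK for Python. This is a hedged sketch; the subscription ID and group name are placeholders:

```python
# Hedged sketch: listing the system-owned lock on the managed resource group
# with the Azure SDK for Python. Subscription and group names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient

client = ManagementLockClient(DefaultAzureCredential(), "<subscription-id>")

for lock in client.management_locks.list_at_resource_group_level(
        "<databricks-managed-resource-group>"):
    print(lock.name, lock.level)  # the system-owned lock appears here

# There is no call that removes the system lock directly; deleting the
# Databricks workspace is what removes it.
```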

With respect to Azure tagging, the lock placed on the Databricks managed resource group prevents you from adding custom tags, deleting any of the resources, or performing any other write operations on resources in the managed resource group.

Example Deployment

Let’s take a look at what happens, in terms of resources and resource groups, when you create an instance of the Azure Databricks service:

Steps

  1. Create an instance of the Azure Databricks service
  2. Specify the name of the workspace (here we used nwoekcmdbworkspace)
  3. Specify to create a new resource group (here we used nwoekcmdbrg) or choose an existing one
  4. Hit Create

Results

  1. Creates nwoekcmdbrg resource group
  2. Automatically creates nwoekcmdbworkspace, which is the Azure Databricks Service. This is contained within the nwoekcmdbrg resource group.
  3. Automatically creates the databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km resource group. This contains a single storage account, a network security group and a virtual network.

Click on the workspace (Azure Databricks service), and it brings up the workspace with a “Launch Workspace” button.

Launching the workspace uses Azure Active Directory (AAD) to sign you into the Azure Databricks service. From here you can create a Databricks cluster, run queries, import data, create a table, or create a notebook to start querying, visualizing and modifying your data. Here, we create a new cluster to demonstrate where the resources for the appliance are stored.
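As an example, a first notebook cell might look like the sketch below; the sample dataset path is one commonly shipped with Databricks workspaces, but it and the column names may vary:

```python
# Hypothetical first notebook cell: load a sample CSV, register it as a
# temporary view and query it with Spark SQL. In Databricks notebooks the
# `spark` session object is predefined; the dataset path may vary.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/databricks-datasets/samples/population-vs-price/data_geo.csv")
)
df.createOrReplaceTempView("data_geo")

spark.sql("""
    SELECT `State`, COUNT(*) AS cities
    FROM data_geo
    GROUP BY `State`
    ORDER BY cities DESC
""").show(10)
```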

After the cluster is created, a number of new resources appear in the Azure Databricks managed resource group databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km. Instead of merely containing a single VNet, NSG and storage account as it did initially, it now contains multiple VMs, disks, network interfaces, and public IP addresses.

The workspace nwoekcmdbworkspace and the original resource group nwoekcmdbrg both remain unchanged, as all changes are made in the managed resource group databricks-rg-nwoekcmdbworkspace-c3krtklkhw7km. If you click on “Locks,” you can see there is a read-only lock placed on it to prevent deletion. Clicking the “Delete” button yields an error saying the lock could not be deleted. If you change tags on the original resource group, the changes are reflected in the “databricks-xxx” resource group, but you cannot change tag values in the databricks-xxx resource group directly.

Summary

When using Azure Databricks, it can be confusing when a new workspace and managed resource group simply appear. Azure automatically creates a Databricks workspace, as well as a managed resource group containing all the resources needed to run the cluster, and protects the latter with a system-level lock to prevent deletions and modifications. The only way to remove the lock is to delete the service. This can be a significant limitation if changes need to be made to tags in the managed resource group. However, changes made to the parent resource group are correspondingly applied to the managed resource group.

Want to reap the full benefits of cloud computing? Reconsider your journey.

There’s no denying that companies have realized many benefits from using public clouds – hyperscalability, faster deployment and, perhaps most importantly, flexible operating costs. Cloud has helped organizations gain access to modern applications and new technologies without many upfront costs, and it has transformed software development processes.

But when it comes to public cloud migration, many organizations are acting with greater discretion than it might at first appear. Enterprise IT spending on public cloud services is forecast to grow 18.4 percent in 2021 to total $304.9 billion, according to Gartner. This is an impressive number, but it’s just under 10 percent of the entire worldwide IT spending projected at $3.8 trillion over the same period. While cloud growth is striking, it pays to heed the context.

The data center still reigns

In 2021, spending on data center systems will be the second-largest area of growth in IT spending, just behind enterprise software. And while much of that growth is attributed to hyperscalers, a significant increase also comes from renewed enterprise data center expansion plans. Based on Anteelo Technology’s internal survey of its global enterprise customers, nearly all of them plan to operate in a hybrid cloud environment, with nearly two-thirds of their technology footprint remaining on-premises over the next five years or longer. Uptime Institute’s 2020 Data Center Industry Survey also shows that a majority of workloads are operating in enterprise data centers.

Adopting cloud is a new way of life

Deciding what should move to the public cloud takes careful planning followed by solid engineering work. We are seeing that some enterprises, in rushing to the public cloud, don’t have an exit strategy for their current environments and data centers. We have all come across companies that started deploying multiple environments in the cloud but did not plan for changes in the way they develop, deploy and maintain applications and infrastructure. As a result, their on-premises costs stayed the same, while their monthly cloud bill kept rising.

Not everything should move to the public cloud. For example, many enterprises run mission-critical business applications that require high transaction processing, high resiliency and high throughput, without significant seasonal variation in demand. In these cases, it is more practical to protect and support existing IT infrastructure investments, such as an on-premises data center or a modernized mainframe, because moving such environments to the public cloud is complex and costly.

To achieve the full benefits, including cost benefits, let’s not forget the operational changes that using the public cloud requires — new testing paradigms, different development models, site reliability, security engineering and regulatory compliance — all of which require flexible teams and alternative ways of working and collaborating.

The key point: Enterprises are not moving everything to the public cloud because many critical applications are better suited for private data centers, while potentially availing themselves of private cloud capabilities.

How can Anteelo help?

With ample evidence that hybrid cloud is the best answer for large enterprise customers adopting a cloud strategy, employing Anteelo as your managed service provider, with our deep engineering, infrastructure and application management experience, is a good bet. We hold a leading position in providing pure mainframe services globally and have the skills on hand to help customers with complex, enterprise-scale transformations.

Our purpose-built technology solutions, spanning the Enterprise Technology Stack, can reduce IT operating costs by up to 30 percent. In running and maintaining mission-critical IT systems for our customers, we manage hundreds of data centers and hundreds of thousands of servers, and we have migrated nearly 200,000 workloads to the hybrid cloud, including for businesses that use mainframe systems for their core, critical solutions. A hybrid cloud solution is the ideal, fit-for-purpose answer to many unique business demands.

Customers want to migrate or modernize applications for many reasons. Croda International is a good example, with its phased approach for cloud migration. Whether moving to the public cloud, implementing a hybrid approach or enhancing non-cloud systems, Anteelo’s proven, integrated approach enables customers to achieve their goals in the quickest, most cost-effective way.

The lesson here: Be careful about drinking the public cloud-only Kool-Aid. With many cloud migrations falling short of their full, intended benefits, you need to assess the risks and rewards. More importantly, a qualified, experienced engineering team will not only help design the right plan, but will ensure that complications are quickly resolved — making for a smoother journey.

And most importantly, every enterprise should look at public cloud as part of its overall technology footprint, knowing that not everything is right for the cloud. Modernizing the technology in your environment should not be overlooked, since it may bring more timely results and better business outcomes, including improving your security posture.

WordPress 5.5 – Guide

WordPress has just released version 5.5 and it’s one of the most feature-packed updates since the launch of version 5.0 in December 2018. Here, we’ll look at some of its most useful and helpful advancements.

1. Gutenberg enhancements

The latest update sees further enhancements to the popular Gutenberg block editor which was first launched with WordPress 5.0. The interface has been tweaked to make it more user-friendly, more blocks have been added to build pages with and there are two new features: block patterns and block directory.

Block patterns are predefined block layouts that can be inserted onto your pages with settings already in place. They can save users a great deal of time and effort and can be tweaked if required. What’s great about the feature is that patterns can be created and shared, so while there are not many available at the moment, the intention is that developers will begin to create these predefined blocks and make them available in the same way that plugins are available now.

With the expectancy that the number of blocks and block patterns will rise dramatically, WordPress has introduced the block directory. Similar to the plugin and theme directory, it is designed to help users browse and search for the blocks and patterns they want to use.

2. Easier image editing

Any images inserted into the standard image block can now be edited without having to open them in the media library. Instead, they can be cropped, resized and rotated within the block itself. The biggest benefit is that you can see the changes straight away, saving users the hassle of going back and forth to the image library until they get the image exactly as they want it. Unfortunately, this isn’t yet available for other block types, though it may appear in a future update.

3. Lazy-loading images

Good news for those wanting their WordPress website to load faster: version 5.5 makes lazy-loading the default image setting. This means images are only downloaded to a user’s browser as they scroll down the page towards them. By delaying the download of image files, the rest of the site can load in the browser much quicker. Not only is this great for the user experience; it will also help with SEO, as page speed is an important ranking factor.

4. Responsive content previews

While page previews have always been possible in WordPress, version 5.5 gives you the chance to view how your unpublished page will look on smartphones and tablets as well as on PCs. With Google’s drive towards ‘mobile-first’ website development, this can help the pages you publish to meet the search engine’s high expectations for how a website looks and works on mobile devices. Even more importantly, it will ensure your site continues to communicate effectively as the use of mobile browsing grows.

5. Default XML sitemaps

XML sitemaps are highly valuable files that enable search engine crawlers to index every part of your website. Without them, there’s a chance that parts of your site might not get indexed and, as a result, not be searchable on the internet. You can also submit a sitemap to Google Search Console so that changes you make to your site are indexed automatically, without waiting for the search engine crawlers to find them.

Prior to this version, users needed a plugin to generate an XML sitemap; with the 5.5 update, WordPress generates one automatically.

6. Automatic updates for plugins

Finally, we come to our favourite feature: automatic updates. As a web host, the security of our customers’ websites is a major concern and one of the biggest threats comes from vulnerabilities in plugins. While these vulnerabilities are usually spotted and patched very quickly by developers, millions of websites don’t update to the newer versions quickly enough and this leaves them open to cyberattacks.

The easiest solution is to enable automatic updates and this has been possible for some time using plugins like Jetpack or, for users of cPanel, in the actual control panel. Thankfully, this feature has now been built into the WordPress core and so is available to every user without the need for third-party software, and this should make millions of websites far more secure.

However, as some WordPress websites rely on legacy plugins, the new version does not make automatic updates the default setting. Indeed, there is always a remote possibility that an update might cause a compatibility issue which you may wish to test before going live with it. However, if you are confident about the plugins you use and wish to enable automatic updates, you can do so in the ‘Plugins’ area of version 5.5.

Conclusion

As you can see, WordPress 5.5 is a major update providing some very useful new features. It will make it easier to build better pages and edit images, help websites perform better, especially on mobile devices, improve SEO through faster loading and XML sitemaps, and enhance security by offering automatic updates.

What is commonly overlooked in B2B dynamic pricing solutions?

Nowadays, corporate executives recognize that analytics is pivotal for pricing teams to create solutions that enable them to achieve their firm’s pricing objectives.

In the B2B domain, ‘dynamic pricing’ is a critical approach to bring substantial benefits to companies.

  1. It enables them to predict when to raise prices to capture upside and when to lower them to avoid volume losses, which speeds up their decision-making process.
  2. It considers various variables vital to determining a product’s desired price, such as demand, deal size, customer type, geography, competitors’ product price, product type, and many more.

With the appropriate set of technologies, advanced analytics, agile processes, and problem-solving skills, one can build a powerful dynamic pricing engine. During the design and development phase, the vendor(s) or internal team works closely with the pricing department to understand their objectives and get inputs on pricing solutions. After completion, price recommendations are passed on to the sales representatives. And the way they follow the recommendations determines the solution’s success.
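To ground this, here is a minimal sketch of what the modeling core of such an engine might look like, assuming scikit-learn and entirely hypothetical file, feature and target names:

```python
# Minimal sketch of a dynamic-pricing model core using scikit-learn.
# The data file, feature names and target column are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

deals = pd.read_csv("historical_deals.csv")  # past quotes with realized prices
features = ["deal_size", "customer_type", "region", "product_type",
            "demand_index", "competitor_price"]
X = pd.get_dummies(deals[features])          # one-hot encode categorical inputs
y = deals["realized_price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("holdout R^2:", model.score(X_test, y_test))

# Recommended price for a new deal, to be reviewed by the sales rep.
print("recommended price:", model.predict(X_test.iloc[[0]])[0])
```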

Now, suppose a higher price is recommended for some customers, but the root cause is not explicit. In such a case, the sales representatives may be reluctant to use the recommendation for fear of losing sales.

The effectiveness of dynamic pricing depends on sales representatives

Although pricing instructions are available to the sales reps, for them, the dynamic pricing solution is still a black box. Quite rightly, if they do not understand the rationale behind the price fluctuation for specific products/solutions, how will they negotiate with customers?

Many pricing teams overlook this aspect, which impacts the effectiveness of pricing solutions. However, there are multiple ways to get salespeople to accept dynamic pricing. Here’s how:

  1. The team responsible for building new dynamic-pricing processes and tools needs to incorporate the sales team’s knowledge into the system.
  2. Throughout the decision cycle, the sales representatives should be treated as partners, and the sales managers should be involved in the solution building process.
  3. Once the solution is ready, the pricing team and sales managers must explain the rationale behind the new price recommendations.
  4. This way, the salespersons can justify the new price.

All of this requires collaboration and extra time, but it is worth the extra effort.

Besides, sales staff can also feed win and loss information back into the system to steadily improve the model’s accuracy and uncover new insights, making dynamic pricing self-reinforcing. This kind of involvement boosts their confidence in the solution and makes their experience count. Moreover, incentive structures also need to be realigned so that sales reps are rewarded for following the recommendations, meaning agents are compensated based on the results generated by the pricing tool. Analytics can also help design this kind of incentive compensation.

A significant impact cannot come only from having a robust solution. The sales reps are equally crucial in enabling the last mile adoption of your dynamic pricing solution.

Why do you need a software-defined data centre in your hybrid cloud?

If there is one thing that 2020 has taught us, it is that things can change on a dime. Over the last year, we have learned how to better cope with dramatic change in how we run our businesses – setting up remote working, creating more online services to satisfy customers’ new demands and migrating more applications to the cloud. But there’s more to do.

In these times, businesses are demanding even more agility and flexibility from their internal IT departments, which already have been under pressure to modernize data-center operations as the popularity of SaaS and the public cloud grows.

The trend toward data center virtualization is sure to intensify. In the current environment, we may need to reconsider how we think about and transition to the software-defined data center (SDDC). It’s increasingly important to have solid, standardized protocols and processes for SDDC to improve your company’s agility, scalability and cost profile. SDDC’s value also lies in its ability to improve resiliency, helping IT more seamlessly provision, operate and manage data centers through APIs in the midst of a crisis.

A well-groomed SDDC architecture primes an organization for its transformation journey to hybrid cloud. We won’t say that journey is inevitable for everyone, but it’s far more likely than not.

Evidence of a growing movement to hybrid cloud comes from a recent Everest Group survey of 200 enterprises, which found that three out of four respondents said they have a hybrid-first or private-first cloud strategy, and 58% of enterprise workloads are on or expected to be on hybrid or private clouds. As much as companies may like the idea of moving everything to public cloud for its flexibility and cost benefits, it just isn’t practical for many reasons, including compliance and security concerns.

As a virtualized pool of resources, SDDC is the optimal foundation for hybrid cloud environments. It provides a common platform for both private and public clouds, automating resource assignments and tasks, simplifying and speeding application deployment, and serving as the backbone of a high-availability infrastructure. Operational and IT labor costs shrink as a result.

Make SDDC work for you

If you’ve already invested in SDDC software but aren’t seeing the returns you’d hoped for, you’re in the same boat as many other businesses. Companies often start on the road to virtualizing and automating their compute, storage and networking infrastructure, but they haven’t changed their thinking about how to operate the environment by reorganizing management functions with a code-based mindset.

It’s time to think differently.

The transition is not unlike what took place as DevOps software development practices overtook waterfall development, creating an environment where DevOps engineers came together with developers and IT operational staff to facilitate the creation of and regular release updates for products.

To get the most value from SDDC, you must merge the traditional functions of architecture, engineering, integration and operations teams into a DevOps kind of model to enhance the feedback loop and make improvements in the architecture/design.

Constant feedback loops need to be institutionalized. Adopting the SDDC infrastructure-as-code approach creates the continuous delivery pipelines for business applications that are critical to business competitiveness. Remember: If you can’t roll out solutions to answer customers’ needs at the speed of thought, you’re at risk of losing business to a rival that can.
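The following purely illustrative sketch captures the infrastructure-as-code idea: the desired state of the environment lives in version-controlled code, and a reconcile step computes the actions needed to reach it. Every name and resource here is hypothetical:

```python
# Illustrative-only sketch of infrastructure as code: desired state is data
# in version control; a reconcile step derives the provisioning actions.
desired_state = {
    "web-tier": {"vms": 4, "cpu": 8, "network": "dmz"},
    "db-tier":  {"vms": 2, "cpu": 16, "storage_gb": 2048},
}

def reconcile(current: dict, desired: dict) -> list[str]:
    """Return the actions needed to move the environment to the desired state."""
    actions = []
    for tier, spec in desired.items():
        have = current.get(tier, {}).get("vms", 0)
        want = spec["vms"]
        if have < want:
            actions.append(f"provision {want - have} VM(s) in {tier}")
        elif have > want:
            actions.append(f"deprovision {have - want} VM(s) in {tier}")
    return actions

current_state = {"web-tier": {"vms": 2}, "db-tier": {"vms": 2}}
for action in reconcile(current_state, desired_state):
    print(action)  # in a real SDDC these would be calls to provisioning APIs
```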

Management silos need to break down and new tooling and processes must be standardized. There is no longer a need to invest in developing vendor-specific hardware operations skills. A culture shift is required for siloed network, storage and compute teams. There’s no room for managing discrete environments if your business is to achieve a complete, automated and cloud-ready SDDC. Integrated teams must be aligned to a single goal.

SDDC environments deliver other important benefits. Organizations with different IT environments in different regions suffer from a lack of consistency in hardware-oriented data center infrastructures. Replacing the confines and confusion of this setup with a hardware-agnostic approach using intelligent software streamlines the process of moving workloads across resources for better disaster recovery, business continuity and scalability.

SDDC matures for the digital era

Many considerations must go into the build-out of an SDDC. Businesses will find that the solutions and services available with Anteelo’s Enterprise Technology Stack set the groundwork for developing and refining SDDC capabilities.

It starts with our understanding and management of even the most complex customer environments, where we can apply our knowledge to help businesses understand the transformation journey. We can manage and maintain your SDDC, assisting you with everything from advising you about what applications are appropriate to live in the cloud to maintaining tight security controls.

Success in our digital era demands less complicated and more easily managed data centers. SDDC is the mature and sophisticated answer to that need.

5 Ways to Boost Sales on Your Product Pages

Getting visitors to your website requires a great deal of work and, for many businesses, quite a bit of advertising expenditure. What you don’t want is all this effort and money to go to waste. Once those visitors land on your product pages, you want them to buy your products. Some websites do this far more successfully than others and often the key factor is in the way the products pages are optimised for selling. In this post, we’ll give you 5 tips on how to make your product pages sell more.

  • Make sure you have an effective call to action

The ultimate aim of any product page is to sell the product. On a website, this means getting the visitor to carry out an action, usually clicking on a button which may say ‘Add to Basket’ or ‘Buy Now’. That button, or to be precise, the words written on it, is your call to action, i.e., it is directing the customer what to do.

The call to action is one of the most crucial elements on the page and if it is ineffective, it will impact your conversion rates. To increase the effectiveness of your call to action button, the instructions need to be clear and carrying them out needs to be easy. The more complex it is, the fewer visitors will click. All it needs to do is guide the visitor to the next step of the buying journey.

In addition to being simple, it also has to be conspicuous. If it is hard to find, some customers are going to miss it, get frustrated and go shopping elsewhere. For this reason, it needs to be clearly visible, appropriately sized to catch attention and stand out from the other elements of the page, such as your product description. Using a contrasting colour for the button’s font and background can also help improve its chances of being clicked.

To improve effectiveness even more, you can use A/B split testing to test different versions of your call to action to see which of them has the biggest effect on conversions.
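For instance, a simple two-proportion z-test is one way to judge whether the difference in conversion rates between two variants is statistically meaningful; the counts below are made-up illustration values:

```python
# Two-proportion z-test comparing two call-to-action variants.
# The click and visitor counts are made-up illustration values.
from math import sqrt
from statistics import NormalDist

clicks_a, visitors_a = 120, 2400   # variant A: "Add to Basket"
clicks_b, visitors_b = 156, 2400   # variant B: "Buy Now"

p_a, p_b = clicks_a / visitors_a, clicks_b / visitors_b
p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```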

  • Use product images of the highest quality

If people are going to buy something online, product photography and video are the only things that let them see what it looks like. It doesn’t take a genius to work out, therefore, that if the photographs are naff, the products featured in them aren’t going to look too good either. If your site has poor quality images, it’s unlikely to be achieving the levels of sales it could be doing.

Unfortunately for those sites, product images are some of the most powerful elements of a product page. When done well, not only do they give a thorough idea of what the product actually looks like, they also put the product in a setting that sells an aspiration that the customer wants to achieve. They won’t just see a vacuum cleaner, they’ll see the clean house with designer furniture that matches the lifestyle they aspire to.

It’s these clever images with their powerful messages that grab the customer’s attention and make them want to buy. To improve your product pages’ effectiveness, make sure you use high-quality images which show various views of the product and, if possible, show how it will improve the life of the purchaser. Key to this, however, is making sure that the images you use reflect the aspirations of your target audience and show off the identity of your brand.

  • Use product descriptions that sell

Many product descriptions fail to be effective because they focus too much on the features of a product and not enough on the benefits of owning it. What you need to consider is that when people buy something, they are looking for a solution. They want a product that will solve a problem, whether that’s a vacuum cleaner to make it easier to clean the house or a new jacket to make them feel good when they are going out.

An effective product description will illustrate how a feature solves a problem or benefits the consumer. For example, if a vacuum cleaner is bagless, state the benefits: it is easier to empty and saves money on the cost of replacement bags.

While you may think it is obvious what the benefits are, this doesn’t mean you should assume the same for your customers. What’s more, it is possible to write the benefits to match the needs and aspirations of your target audience.

  • Write copy for people, not search engines

With so much focus on SEO and doing well in search engine results, the importance of how well the copy reads for a visitor is often overlooked. However, if all they find is a bulleted list of features and descriptions that are overloaded with keyword phrases, it is not going to keep them engaged.

Write copy that is interesting to read, speaks directly to the visitor and which includes the language that they use to describe the product – and if you need guidance on where to discover what they say, just look up the product or similar products on publicly available review sites.

Finally, remember that the voice and tone that you use in your writing should be one which is both appealing to your readers and which matches the identity of your brand.

  • Include FAQs, specifications and live chat

One of the biggest advantages of buying from a bricks and mortar store is that there is always someone there who can deal with your questions. Those people who shop online will have the same questions but don’t always have the opportunity to find the answers. If you run an eCommerce site, you need to find out what those questions may be and provide the answers in an FAQ section. If not, your customers may buy from websites where the answers are available.

Over time, you’ll have received emails or online chat questions about your products and these should be the starting point for your FAQ section. Displaying a detailed product specification can also help provide answers.

The other key feature of many of today’s product pages is live chat. This enables a member of your team to answer any questions about a product there and then as well as deal with any other issues a customer has.

Conclusion

Effective product pages are critical to the success of any online store. In this post, we have looked at five different elements which can help improve overall sales. With better calls to action and product images, copy that focuses on benefits and is written for people, not search engines, and with the addition of FAQs, specifications and live chat, hopefully, you can boost your sales too.

From machine intelligence to security and storage, AWS re:Invent opens up new options.

Technology as an enabler for innovation and process improvement has become the catchword for most companies. Whether it’s artificial intelligence and machine learning, gaining insights from data through better analytics capabilities, or the ability to transfer data and knowledge to the cloud, life sciences companies are looking to achieve greater efficiencies and business effectiveness.

Indeed, that was the theme of my presentation at the AWS re:Invent conference: the ability to innovate faster to bring new therapies to market, and how this is enabled by an as-a-service digital platform. For example, one company that had an increase in global activity needed help to accommodate the growth without compromising its operating standards. Rapid migration to an as-a-service digital platform led to a 23 percent reduction in its on-premises system.

This was my first re:Invent, and it was a real eye opener to attend such a large conference. The week-long AWS re:Invent conference, which took place in November 2018, brought together nearly 55,000 people in several venues in Las Vegas to share the latest developments, trends, and experiences of Amazon Web Services (AWS), its partners and clients.

The conference is intended to be educational, giving attendees insights into technology breakthroughs and developments, and how these are being put into use. Many different industries take part, including life sciences and healthcare, which is where my expertise lies.

This slickly organized, high-energy conference offered a massive amount of information shared across numerous sessions, but with a number of overarching themes. These included artificial intelligence, machine learning and analytics; serverless environments; and security, to mention just a few. The main objective of the meeting was to help companies get the right tool for the job and to highlight several new features.

During the week, AWS also rolled out new functionalities designed to help organizations manage their technology, information and businesses more seamlessly in an increasingly data-rich world. For the life sciences and healthcare industry — providers, payers and life sciences companies — a priority is being able to gain insights based on actual data so as to make decisions quickly.

That has been difficult to do in the past because data has existed in silos across the organization. But when you start to connect all the data, it’s clear that a massive amount of knowledge can be leveraged. And that’s critical in an age where precision medicine and specialist drugs have replaced blockbusters.

A growing number of life sciences companies recognize that to connect all this data across the organization, with partners and with clients, they need to move to the cloud. As such, cloud, and in particular major services such as AWS, is becoming more mainstream. There’s a growing need for platforms that allow companies to move to cloud services efficiently and effectively without disrupting the business, while at the same time making use of the deeper functionality a cloud service can provide.

Putting tools in the hands of users

One such functionality that AWS launched this year is Amazon Textract, which automatically extracts text and data from documents and forms. Companies can use that information in a variety of ways, such as doing smart searches or maintaining compliance in document archives. Because many documents have data in them that can’t easily be extracted without manual intervention, many companies don’t bother, given the massive amount of work that would involve. Amazon Textract goes beyond simple optical character recognition (OCR) to also identify the contents of fields in forms and information stored in tables.
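As a hedged sketch of what a Textract call looks like with boto3 (the bucket and document names are invented for illustration):

```python
# Hedged sketch: extracting text, form fields and tables from a document
# with Amazon Textract via boto3. Bucket and object names are invented.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "intake-form.png"}},
    FeatureTypes=["FORMS", "TABLES"],  # goes beyond plain OCR
)

# The response contains blocks for lines, key-value pairs and table cells.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```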

Another key capability of advanced cloud platforms is the ability to carry out advanced analytics using machine learning. While many large pharma companies have probably been doing this for a while, the resources needed to invest in that level of analytics have been beyond the reach of most smaller companies. However, leveraging an observational platform and using AWS to provide it as a service puts these capabilities within the reach of life sciences companies of all sizes.

Having access to large amounts of data and advanced analytics enabled by machine learning allows companies to gain better insights across a wide network. For example, sponsors working with multiple contract research organizations (CROs) want a single view of performance across the various sites and CROs. At the moment, that view can be disjointed, but by leveraging a portal through an observational platform, it’s possible to see how sites and CROs are performing: Are they hitting the cohort requirements set? Are they on track to meet objectives? Or is there an issue that needs to be managed?

Security was another important theme at the conference and one that raised many questions. Most companies know theoretically that the cloud is secure, but they’re less certain whether what they have in place gives them the right level of security for their business. That can differ depending on what you put in the cloud. In life sciences, if you are putting research and development systems into the cloud, it’s vital that your IT is secure. But with the right combination of cloud capabilities and security functionality, companies can build a more secure environment there than they would have on-premises.

The conference highlighted multiple new functions and services that help enterprises gain better value from moving to the cloud. These include AWS Control Tower, which allows you to automate the setup of a well-architected, multi-account AWS environment across an organization. Storage was also on the agenda, with discussions about getting the right options for the business. Historically, companies bought storage and kept it on-site. But these storage solutions are expensive to replace, and it’s questionable whether they are the best way forward. During the conference, AWS launched its new S3 Glacier Deep Archive storage class, which allows companies to store seldom-used data much more cost-effectively than legacy tape systems, at just $1.01/TB per month. Consider the large amount of historical data that a legacy product will have. In all likelihood, that data won’t be needed very often, but for companies selling or acquiring a product or company, it may be important to have access to it.
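For illustration, archiving an object directly to that storage class with boto3 looks roughly like the sketch below; the bucket and key names are invented:

```python
# Hedged sketch: writing an object to the Glacier Deep Archive storage class
# with boto3. Bucket and key names are invented for illustration.
import boto3

s3 = boto3.client("s3")
with open("product-history-2009.tar", "rb") as body:
    s3.put_object(
        Bucket="my-archive-bucket",
        Key="legacy/product-history-2009.tar",
        Body=body,
        StorageClass="DEEP_ARCHIVE",  # lowest-cost tier for rarely read data
    )
```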

One of the interesting things I took from the week away, apart from a Fitbit that nearly exploded with the number of steps I took in a day, was how the focus on cloud has shifted. Now the discussion has turned to: “How do I get more from the cloud, and who can help me get there faster?” rather than: “Is the cloud the right thing for my business?” Conversations held when standing in queues waiting to get into events or onto shuttle buses were largely about what each organization is doing and what the next step in its digital journey would be. This was echoed in the Anteelo booth, where many people wanted more information on how to accelerate their journey. One of the greatest concerns was the lack of internal expertise many companies have, which is why having a partner allows them to get real value and innovation into the business faster.

Why Are These Big-Name Brands Moving to Cloud Technology?

The economic turmoil caused by the pandemic has kickstarted the rapid adoption of cloud technology. Across the globe, companies in their thousands are expanding the number of services they operate in the cloud in a bid to speed up digital transformation and put themselves in a better position to withstand the volatility of today’s marketplace. In this post, we’ll look at some major brands to discover why they have decided to migrate to the cloud over the last few months.

Coca-Cola

Arguably the most recognisable brand in the world, Coca-Cola may have been making the same product for 128 years but its operations are strictly 21st century. Its manufacturing processes have long been massively automated and now, it has adopted a cloud-first policy with regard to IT.

As part of its digital transformation, the company has migrated to a hybrid cloud technology setup in a bid to reduce operational costs and increase IT resilience. This will enable it to deploy data analytics and artificial intelligence to provide it with insights that it can use to improve its services and operations.

Coca-Cola will use the migration to streamline its existing IT infrastructure and develop a company-wide platform for standardised business processes, technology and data. In order to integrate the public and private elements of its hybrid cloud, together with existing technology it plans to keep, it will deploy a single-dashboard, multi-cloud management system.

Finastra

UK-based fintech company, Finastra, is migrating to the cloud to accelerate not only its own digital transformation but those of its 8,000 global customers. The objective is to revolutionise the use of technology in the financial services sector by developing a platform that financial companies can use to speed up innovation and improve collaboration.

To achieve this, Finastra will migrate its entire customer base to the new cloud platform. From here, they will be able to create digital-first workplaces and provide their own clients with financial services and solutions, such as electronic notary and e-signature services, which are better suited to today’s digital world.

Major bank migrations: Deutsche Bank and HSBC

Two of the world’s major banks, Deutsche Bank and HSBC, have both announced migration plans over the last few weeks. Deutsche Bank sees the cloud, a key element of its digital transformation, as crucial for increasing revenue and minimising costs. It aims to make use of data science, artificial intelligence and machine learning to improve risk analysis and cash flow forecasting, as well as to develop digital communications that are easier for customers to interact with and which enhance the customer experience.

The German bank is also using the move to improve security, seeing it as a way to help it comply with data protection and privacy regulations and to ensure the integrity of customer data.

HSBC Holdings, the parent company of HSBC Bank, is adopting the cloud to benefit from its storage, compute, data analytics, AI, machine learning, database and container services, as well as for the cloud’s advanced security.

Its major goal is to provide more personalised and customer-centric banking services for its customers, for which it will develop customer-facing applications. It also intends to use the move to update its Global Wealth & Personal Banking division, develop new digital products and improve compliance.

Car manufacturer migrations: Daimler and Nissan

Two leading car manufacturers, Mercedes-Benz parent company, Daimler AG, and Nissan have also announced plans to adopt cloud technology. Daimler will migrate its after-sales portal to the public cloud to help it innovate and accelerate the development of new products and services for its global customer base, as well as to provide it with scalability. Like many other companies, it also sees cloud as being a secure platform and will use it to encrypt and store data to protect it from ransomware and hacking.

Nissan, meanwhile, is using the cloud primarily to help cut costs during the post-pandemic downturn. With poor sales throughout 2020, it views digital transformation as essential to remain agile and resilient.

The move will allow the car maker to store its vast quantities of data far less expensively than in-house and provide it with cost-effective, scalable processing resources. These it will use to undertake application-based, computational fluid dynamics and structural simulations which are needed to design its cars and test them for aerodynamics and structural issues. The cloud will also enable it to carry out performance and engineering simulations, helping it improve its vehicles’ fuel efficiency, reliability and safety.

UK public sector cloud initiative

The UK government has implemented a cloud-first policy in a bid to make the UK the world’s most digitally transformed nation. As part of the project, government departments, local authorities, the NHS, police and educational institutions will be encouraged to initiate cloud-based projects and take advantage of the speed, scalability and security of the public cloud.

To help bring this about, the government has established a digital marketplace on its website where public sector organisations can find approved service providers. Known as the G-Cloud (Government Cloud), these providers, which include eukhost, offer advanced, secure and compliant cloud services, together with the technical expertise needed to make public sector digital transformation a reality.

Conclusion

As these use cases exemplify, cloud adoption and digital transformation are key to helping organisations cope with the impact of the current economic crisis and put themselves in a stronger position to innovate and prosper in the future. And it is not just major brands making the move; businesses across the globe are moving quickly to take advantage of what the cloud has to offer.

Cloud Necessary for Digital Transformation? – Here’s Why!

Across the globe, organisations are acknowledging the need for digital transformation as new technologies like data analytics, AI, ML and the IoT make traditional processes redundant and force unprogressive companies out of business. At the same time, shifting customer needs and behaviours demand that companies undertake digital transformation in order to evolve. Without the adoption of cloud technology, however, much of this would not be possible. Here, we’ll explain why.

Organisations which have migrated to the cloud and undergone digital transformation experience both significant growth and improved efficiency. It has enabled them to develop new business models that keep them relevant and thriving in today’s dynamic and volatile marketplace. Thanks to cloud technology, they can innovate at pace, make informed, data-driven decisions and speed up the launch of products and services. What’s more, this is achieved more cost-effectively and efficiently.

1. Cost-effective IT solution

The cloud provides organisations with the opportunity to develop a much more cost-effective business model, one that removes the need to invest heavily in IT infrastructure. By hosting their services and carrying out workloads on their service provider’s infrastructure, not only do they replace significant capital expenditure with less expensive service packages; they also forego many of the associated costs of operating a datacentre, including machine maintenance and server management.

2. Agility

The speed at which servers and software can be deployed in the cloud and the rapidity with which applications can be developed, tested and launched helps drive business growth. Additionally, this agility enables organisations to concentrate on more business-focused issues, such as security and compliance, product development or monitoring and analysis, instead of using up precious time and effort provisioning and maintaining IT resources. Together, these cloud attributes give companies a competitive advantage in the marketplace.

3. Scalability

Another key advantage that the cloud brings to digital transformation is instant scalability. It provides businesses with a cost-effective, pay-per-use way of scaling up, on demand, to ensure they always have the resources needed to cope with spikes or to carry out large workloads. This makes the expensive practice of purchasing additional servers that cater for busy periods but sit redundant for much of the time unnecessary.

4. High availability

Today’s customers demand uninterrupted, 24/7 access to products and services and putting this in place is a key aim of many companies’ digital transformation. Similarly, some businesses rely on critical apps for processes, such as manufacturing, that also need to be operational at all times. What the cloud brings here is guaranteed high availability of 100% uptime. As cloud servers are virtual, instances can be moved between hardware and this means that downtime due to server failure becomes a thing of the past for cloud users. Indeed, even if an entire datacentre goes offline because of a natural disaster, service can be maintained by moving the instances to a datacentre in another geographical location.

5. Security and compliance

Security and compliance are a high priority for all companies and are often a major challenge to those with in-house systems that lack both the budget and expertise to put effective measures into place.

The cloud can play a significant role in improving both security and compliance. Service providers employ highly skilled security experts and deploy advanced tools to protect their customers’ systems and data and to comply with stringent regulations. This ensures cloud users operate in highly secure environments, protected by next-gen firewalls with intrusion prevention systems and in-flow virus protection that detect and isolate threats before they reach a client’s server.

6. Built-in technology upgrades

Keeping up with the Joneses as far as technology is concerned is always a challenge for organisations, not simply for the cost of regularly purchasing newer hardware, but also the effort of migrating applications and data during the process.

By adopting cloud technology, companies no longer have this issue. Service providers regularly update their hardware in order to remain competitive themselves and this ensures that their customers benefit from always having the latest technology, such as Xeon processors and SSD hard drives, at their disposal. What’s more, virtualisation means any migration to new hardware takes place unnoticed.

7. Collaboration and remote working

Digital transformation involves the replacing of outdated working practices and legacy systems with those that support innovation and agility. The cloud is the ideal environment for this, providing both the ability for remote working and improved collaboration. Many cloud-based platforms have been developed with collaboration in mind, offering video conferencing, file sharing, syncing and project management tools for teams to use in and out of the office. Files are instantly updated and are available anywhere with a connection; privileges and authentication can be determined for every employee, and projects, people and progress can be monitored and tracked.

Conclusion

Digital transformation is fast becoming a necessity for organisations, providing the means to help them be more agile, innovative, cost-effective and competitive while being better able to meet the needs of their customers. Cloud technology is instrumental in bringing this about as it offers the ideal environment in which to deploy the technologies and undertake the workloads on which digital transformation depends.

The platform to focus on the most valuable asset: Data-Centric Architecture.

The value proposition of global systems integrators (GSIs) has changed remarkably in the last 10 years. By 2010, it was the waning days of the so-called “your mess for less” (YMFL) business model. GSIs would essentially purchase and run a company’s IT shop and deliver value through right-shoring (moving labor to low cost places), leveraging supply chain economies of scale and, to a lesser degree, automation.

This model had been delivering value to the industry since the ‘90s but was nearing its asymptotic conclusion. To continue achieving the cost savings and value improvements that customers were demanding, GSIs had to add to their repertoire. They had to define, understand, engage and deliver in the digital transformation business. Today, I am focusing on the value GSIs offer by concentrating on their clients’ data, rather than being fixated on the boxes or cloud where data resides.

In the YMFL business, the GSIs could zero in on the cheapest performance-compliant disk or cloud to house sets of application, log, analytics and backup data. Each data set was created and used by and for its corresponding purpose, and often only tenuously tied together by sophisticated middleware and applications for other purposes, like decision support or analytics.

Getting a centralized view of the customer was difficult, if not impossible: partly because the relevant data was stovepiped in an application-centric architecture, and partly because separate data islands were created for analytics repositories.

Enter the “Data-Centric Architecture.” Transformation to a data-centric view is a new opportunity for GSIs to remain relevant and add value to customers’ infrastructures. It goes a layer deeper than moving to the cloud or migrating to the latest, faster, smaller boxes.

A great way to help jump start this transformation is by rolling out Data as a Service offerings. Rather than taking the more traditional Storage as a Service or Backup as a Service approach, Data as a Service anticipates and provides the underlying architecture to support a data-centric strategy.

It is first and foremost a repository for collected and aggregated data that is independent of application sources. From this repository, you can draw correlations, statistics, visualizations and advanced analytical insights that are impossible when dealing with islands of data managed independently.
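As a small illustration of that idea, the sketch below, with hypothetical file and column names, joins data from two formerly separate application silos into one per-customer view and computes a cross-source correlation:

```python
# Illustrative sketch: aggregating siloed application data into a single
# repository view with pandas. File and column names are hypothetical.
import pandas as pd

orders = pd.read_csv("erp_orders.csv")        # from the ERP system
tickets = pd.read_csv("support_tickets.csv")  # from the helpdesk system

# Join per-customer views that previously lived on separate data islands.
combined = (
    orders.groupby("customer_id")["order_value"].sum().to_frame()
    .join(tickets.groupby("customer_id").size().rename("ticket_count"))
)

print(combined.corr())  # e.g., do high-value customers raise more tickets?
```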

It is more than the repository of the algorithmically derived data lake. A Data as a Service approach provides cost effective accessibility, performance, security and resilience – aimed at addressing the largest source of both complexity and cost in the landscape.

Data as a Service helps achieve these goals by minimizing, simplifying and reducing the data and its movement within and outside of enterprise and cloud environments. This is achieved around four primary use cases, ranging from enterprise storage to backup and long-term retention.

Each of these use cases illustrates the underlying capabilities necessary to cost-effectively support the move to a data-centric architecture. Combined with a “never migrate or refresh again” evergreen approach, GSIs can focus on maximizing value in the stack of offerings. This approach is revolutionary. In the past, the focus was merely on refreshing aging boxes, the specifications of a particular cloud service, or the infrastructure supporting a particular application. Today, GSIs can focus on the treasured asset in their customers’ IT: their data.
