Here is how Brexit is affecting your web hosting


As the likelihood of a no-deal Brexit increases, businesses throughout the UK will be taking stock of what they need to do come October 31. One area that many businesses might have overlooked is how a no-deal Brexit may affect their hosting. In this post, we’ll look at what the potential problems are and what plans you might need to put in place.

Do you have an EU based web host?


Quite a few web hosts that operate in the UK are based in the EU and have their data centres located in European countries. This could cause several issues for UK customers in the event of a no-deal Brexit. As trade agreements between the UK and the EU would come to an end on October 31, prices for hosting packages could change, as EU-based hosts supplying the UK could become subject to tariffs.

This is by no means a certainty and, even if tariffs are imposed, EU-based hosts could adjust prices to counteract them. However, it is an issue which needs to be considered, especially as the pound-to-euro exchange rate is likely to fluctuate considerably in the aftermath of withdrawal.

Implications for data protection


The EU has the world’s most stringent data protection laws – something the UK is currently signed up to. Part of its legislation requires that any data held on EU citizens that is stored on servers outside of the EU must have the same level of protection as that which is stored inside the EU.

In 2000, for example, the EU and USA implemented the Safe Harbour Agreement, which enabled American companies to transfer personal data from European servers to those in the USA, on the proviso that the US provided privacy protection in line with EU directives. In 2015, following revelations that, for reasons of national security, the US government retained the right to access EU citizens’ data stored on US servers, the European Court of Justice ruled that the Safe Harbour Agreement no longer offered adequate protection and was thus invalid. Businesses that used service providers which transferred their data to US servers found themselves at risk of substantial fines from the EU.

From October 2019, the UK will find itself in a similar position to the US. As it will no longer be part of the EU, it will have to prove that EU citizens’ data held on UK servers maintains the same level of protection as it currently does. This should not be an issue if the UK government keeps the existing laws in place, though it may need to negotiate a similar agreement with the EU as part of the process.

The potential problem, however, is that once the UK has left the EU, it is free to make its own data protection laws which may not satisfy the demands of the EU. That said, there are already big differences in attitudes to data protection within the EU and these are pushing some members to consider adopting their own data protection legislation. Such complexity makes it increasingly likely that the safest place for UK companies to store personal data is on servers based within the UK. This is especially so if the UK strengthens its data protection laws even further and considers UK citizens’ data held on EU servers to be inadequately protected.

.eu domain names


Earlier this year, the EU announced that, following Brexit, .eu domains could only be registered to individuals or organisations geographically located within one of the remaining EU states. Consequently, UK citizens and UK-based organisations would no longer be allowed to register or renew a .eu domain. The only way for a UK company to have a .eu domain would be if it had a subsidiary located within the EU to which it could transfer the registration. Any .eu domain currently registered to a UK citizen or UK-based business cannot be transferred to another UK-based organisation or be renewed. Eventually, all formerly UK-registered .eu domains will be revoked and made available for registration in the EU. This does not, however, apply to EU citizens living in the UK.

Should a no-deal Brexit take place, the EU plans to withdraw .eu domains registered to UK organisations or individuals after two months, at which point they will cease to operate and will no longer be usable to host websites. Full revocation will take place 12 months after the UK’s withdrawal.


The uncertainty over Brexit is seeping into all areas of business, including your hosting. It can affect the price you pay for EU-based services, the places you store personal data, and even the right to the .eu top-level domain. With less than six months to go before the UK’s scheduled withdrawal, it may be time to take stock of your current hosting provider.

A better approach to Data Management, from Lakes to Watersheds


As a data scientist, I have a vested interest in how data is managed in systems. After all, better data management means I can bring more value to the table. But I’ve come to learn that it’s not how an individual system manages data but how well the enterprise, holistically, manages data that amplifies the value of a data scientist.

Many organizations today create data lakes to support the work of data scientists and analytics. At the most basic level, data lakes are big places to store lots of data. Instead of searching for needed data across enterprise servers, users pour copies into one repository – with one access point, one set of firewall rules (at least to get in), one password (hallelujah) … just ONE for a whole bunch of things.

Data scientists and Big Data folks love this; the more data, the better. And enterprises feel an urgency to get everyone to participate and send all data to the data lake. But, this doesn’t solve the problem of holistic data management. What happens, after all, when people keep copies of data that are not in sync? Which version becomes the “right” data source, or the best one?

If everyone is pouring in everything they have, how do you know what’s good vs. what’s, well, scum?

I’m not pointing out anything new here. Data governance is a known issue with data lakes, but lots of things relegated to “known issues” never get resolved. Known issues are unfun and unsexy to work on, so they get tabled, back-burnered, set aside.

Organizations usually have good intentions to go back and address known issues at some point, but too often, these challenges end up paving the road to Technical Debt Hell. Or, in the case of data lakes, making the lake so dirty that people stop trusting it.

To avoid this scenario, we need to go to the source and expand our mental model from talking about systems that collect data, like data lakes, to talking about systems that support the flow of data. I propose a different mental model: data watersheds.

In North America, we use the term “watershed” to refer to drainage basins that encompass all waters that flow into a river and, ultimately, into the ocean or a lake. With this frame of reference, let’s contrast this “data flow” model to a traditional collection model.

In a data collection model, data analytics professionals work to get all enterprise systems contributing their raw data to a data lake. This is good, because it connects what was once systematically disconnected and makes it available at a critical mass, enabling comparative and predictive analytics. However, this data remains contextually disconnected.

Here is an extremely simplified view of four potential systematically and contextually disconnected enterprise systems: Customer Relationship Management (CRM), Finance/Accounting, Human Resources Information System (HRIS), and Supply Chain Management (SCM).

CRM
- Stores full client names and system-generated client IDs
- Stores products purchased; field manually updated by account manager
- Stores account manager names
- Goal: Enable each account manager to track the product/contract history of each client

Finance/Accounting
- Stores abbreviated customer names (the tool has a too-short character limit) and customer account numbers
- Stores a list of all company locations; uses 3-digit country codes
- Stores abbreviated vendor names (same too-short character limit), vendor account numbers and vendor IDs with three leading zeros
- Goal: Track all income, expenses and assets of the company

HRIS
- Stores all employee names and employee IDs
- Stores a list of all company locations with employee assignments; uses 2-digit country codes
- Stores Business Unit (BU) names and BU IDs
- Goal: Manage key details on employees

SCM
- Maintains the product list and system-generated product IDs
- Stores vendor names, vendor account numbers and vendor IDs (no leading zeros)
- Stores material IDs and names
- Goal: Track all vendors, materials from vendors, Work in Progress (WIP) and final products


Let’s assume that each system has captured data to support its own reporting and then sends daily copies to a data lake. That means four major enterprise systems have figured out multiple privacy and security requirements to contribute to the data lake. I would consider this a successful data collection model.

Note, however, that the four systems have overlap in field names, and the content in each area is just a little off — not so far as to make the data unusable, but enough to make it difficult. (I also intentionally left out a good connection between CRM Clients and Finance/Accounting Customers in my example, because stuff like that happens when systems are managed individually. And while various Extract, Transform and Load (ETL) tools or Semantic layers could help, this is beyond CRM Client = Finance/Accounting Customer.)
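The vendor-ID mismatch is a concrete case of "just a little off": Finance/Accounting pads IDs with three leading zeros while SCM does not, so a naive join silently finds nothing. A minimal Python sketch (the IDs and names are invented for illustration) shows the failure and one normalisation fix:

```python
# Hypothetical extracts: Finance/Accounting pads vendor IDs, SCM does not.
finance_vendors = {"000412": "ACME CORP LT", "000977": "GLOBEX INDUST"}
scm_vendors = {"412": "Acme Corporation Ltd", "977": "Globex Industries"}

# A naive join on the raw ID matches nothing at all.
naive_matches = set(finance_vendors) & set(scm_vendors)
print(len(naive_matches))  # 0

def canonical_id(vendor_id: str) -> str:
    """Strip leading zeros so '000412' and '412' compare equal."""
    return vendor_id.lstrip("0")

# Normalising both sides to a canonical form restores the join.
finance_canonical = {canonical_id(v): name for v, name in finance_vendors.items()}
matches = set(finance_canonical) & {canonical_id(v) for v in scm_vendors}
print(sorted(matches))  # ['412', '977']
```

This is exactly the kind of context an ETL layer can patch after the fact, but that a watershed approach would prevent at the source by agreeing on one canonical ID format.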

If you think about customer lists, it’s not unreasonable for there to be hundreds, if not thousands, of customer records that, in this example, need to be reconciled with client names. This will have a significant impact on analytics.

Take an ad hoc operational example: Suppose a vendor can only provide half of the materials they normally provide for a key product. The company wants to prioritize delivery to customers who pay early, and they want to have account managers call all others and warn them of a delay. That should be easy to do, but because we are missing context between CRM and Finance/Accounting, and the CRM system is manually updated with products purchased, some poor employee will be staying late to do a lot of reconciling and create that context after the fact.
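That late-night reconciling usually comes down to fuzzy string matching between the CRM's full client names and Finance/Accounting's truncated customer names. A hedged sketch using Python's standard difflib (the names and the 0.6 cutoff are illustrative, not from any real system):

```python
import difflib

crm_clients = ["Acme Corporation Ltd", "Globex Industries", "Initech Solutions"]
finance_customers = ["ACME CORP LT", "GLOBEX INDUST", "INITECH SOLUT"]  # truncated

def best_match(truncated: str, candidates: list[str], cutoff: float = 0.6):
    """Return the CRM client name that best matches a truncated customer name."""
    # Compare case-insensitively; truncation means we rely on the shared prefix.
    scored = [
        (difflib.SequenceMatcher(None, truncated.lower(), c.lower()).ratio(), c)
        for c in candidates
    ]
    score, name = max(scored)
    return name if score >= cutoff else None

for customer in finance_customers:
    print(customer, "->", best_match(customer, crm_clients))
```

With hundreds or thousands of records, matches below the cutoff still need human review, which is precisely the manual effort the watershed model aims to eliminate.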

I’ve heard plenty of data professionals comment something like, “I spend 90% of my time cleaning data and 10% analyzing it on a project.” And the responses I hear are not, “Whaaaa?? You’re doing something wrong.” They are, “Oh man, I sooooo know what you mean.”

Whaaaa?? We’re doing something wrong.

The time analytics professionals spend cleaning and stitching data together is time not spent discovering correlations, connections and/or causation indicators that turn data into information and knowledge. This is ridiculous because today’s technologies can do so much of this work for us.

The point of a data watershed approach is to eliminate the missing context. The data watershed is not a technical model for how to get data into a lake; it’s a governance/technical model that ensures data has context when it enters a source system, and that context flows into the data lake.

If we return to my four example systems and take a watershed approach, data no longer flows only into the lake: each system feeds, and is fed by, mastered data from the others, so context travels with the data.

While many organizations do have data flowing from system to system, they often don’t have connections between every system. Additionally, it’s not always clear who should “own” the master list for a field.

In my view, the system that maintains the most metadata around a field is the system that “owns” the master data for that field. So, in my example above, both the HRIS and Finance/Accounting systems maintain location lists, but they use different country codes. Finance/Accounting will also maintain depreciation schedules or lease agreements for those locations, so Finance/Accounting wins. The HRIS, unless there is a tool limitation, should mirror and, preferably, be fed the location data from the Finance/Accounting system.
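Feeding the HRIS from the mastered Finance/Accounting location list is then mostly a translation step between code standards. A minimal sketch, assuming the two systems use ISO 3166-1 alpha-3 and alpha-2 country codes respectively (the crosswalk shown is a tiny illustrative subset, and the field names are my own):

```python
# Illustrative subset of an ISO 3166-1 alpha-3 -> alpha-2 crosswalk.
ALPHA3_TO_ALPHA2 = {"GBR": "GB", "USA": "US", "DEU": "DE", "FRA": "FR"}

# Mastered locations as Finance/Accounting (the "owner") records them.
finance_locations = [
    {"site": "London HQ", "country": "GBR"},
    {"site": "Berlin Office", "country": "DEU"},
]

def to_hris_location(loc: dict) -> dict:
    """Mirror a mastered Finance/Accounting location into the HRIS code scheme."""
    return {**loc, "country": ALPHA3_TO_ALPHA2[loc["country"]]}

hris_locations = [to_hris_location(loc) for loc in finance_locations]
print(hris_locations)
# [{'site': 'London HQ', 'country': 'GB'}, {'site': 'Berlin Office', 'country': 'DE'}]
```

Because the HRIS only mirrors the master list, the two systems can never drift apart, and the lake receives location data that already agrees.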

In this example, when each system sends its data to a data lake, it has natural context. Data analytics professionals can grab any field and know the data is going to match – though I would argue that best practice would be to use the field from the “master” system. However, if everything is working right, this should be irrelevant.

Since a data watershed is a governance/technical model, it addresses not just how data flows but how it’s governed. This stewardship requires cross-departmental collaboration and accountability. The processes are neither new nor necessarily difficult – but the execution can be complex. The result is worth the effort, though, as all enterprise data comes to support advanced analytics.

The governance model I picture is an amalgamation of DevOps – the merging of software development and IT operations – and the United Federation of Planets (UFP) from “Star Trek.”

By putting data management and data analytics together in the same way the industry has combined software developers and IT operations, there is less opportunity for conflicting priorities. And, any differences must be reconciled if the project hopes to succeed.

Beyond the DevOps paradigm, the reason the governance model I like best is the UFP – and not just because I get to drop a Trekkie reference – is that it is the government of a large fictional universe, built on the best practices and known failures of our own individual government structures.

The UFP has a central leadership body, an advising cabinet and semiautonomous member states. I think this setup is flexible enough to work with multiple organizational designs and enables holistic data management while addressing the nuances of individual systems.

I would expect the “President of the Federation” to be a Chief Information, Technology, Data, Analytics, etc. Officer. The “Cabinet” would be made up of Master Data Management (MDM), Records and Retention, Legal, HR, IT Operations, etc. And the “Council” members would be the analytics professionals from all the data-generating and -consuming business units in the organization.

And, it’s this last part – a sort of Vulcan Bill of Rights – I feel the strongest about:

Whoever is responsible for providing the analytics should be included in the governance of the data. Those who have felt the pain of munging data know what needs to change – and they need to be empowered to change it.

Data watersheds represent an important shift in thinking. By expanding the data lake model to include the management of enterprise data at its source, we change the conversation to include data governance in the same breath as data analytics — always.

With this approach, data governance isn’t a “known issue” to be addressed by some and tabled by others; it’s an integral part of the paradigm. And while it may take more work to implement at the outset, the dividends from making the commitment are immense: Data in context.

Data Warehouse Benefits and Drawbacks

What Is the Benefit of Modern Data Warehousing?

As businesses gather and store ever greater quantities of data, managing it becomes increasingly challenging. To get the maximum value from it, it needs to be easily accessed and compiled so that it can be analysed. However, when it is stored in separate silos across numerous departments, this is hard to achieve. The solution that many companies are opting for in order to overcome these issues is data warehousing. In this post, we’ll look at the pros and cons of setting up a data warehouse.

What is a data warehouse?


A data warehouse is a centralised storage space used by companies to securely house all their data. As such, it becomes a core resource from which the company can easily find and analyse the datasets it needs to generate timely reports and gain the meaningful insights needed to make important business decisions.
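As a toy illustration of that single, queryable resource, here is a sketch using Python's built-in sqlite3 module as a stand-in for a warehouse (the table and figures are invented for the example; a real warehouse would use a dedicated engine):

```python
import sqlite3

# An in-memory database standing in for a centralised warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 1200.0), ('South', 800.0), ('North', 300.0);
""")

# With everything in one place, a report is a single query
# rather than a hunt across departmental silos.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 1500.0), ('South', 800.0)]
```

The same query against siloed departmental systems would require extracting and merging the data first, which is the overhead a warehouse removes.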

The pros of data warehousing


The growing popularity of data warehousing is down to the benefits it provides businesses. Key here is that a unified data storage solution enhances decision-making, enabling businesses to perform better in the marketplace and thus improve their bottom line. As a data warehouse also means data can be analysed faster, another advantage is that it puts the company in a better position to react to opportunities and threats that come its way.

With the entire array of the company’s data available to them, data managers can make more accurate market forecasts and do so more quickly, helping them implement data-driven strategies swiftly and ahead of their competitors. The accuracy of market forecasts is improved due to the warehouse’s ability to store huge amounts of historical data that can highlight patterns in market trends and shifting consumer behaviours over time.

Data warehousing can also help companies reduce expenditure by enabling them to make more cost-effective decisions, whether that’s in procurement, operations, logistics, communications or marketing. It can also massively improve the customer experience, with end-to-end customer journey mapping helping the company personalise product recommendations, issue timely and relevant communications, deliver better quality customer service and much more.

The cons of data warehousing


While the centralised storage of data brings many benefits, it does have some drawbacks that companies need to consider. For example, with such vast amounts of data in one place, finding and compiling the datasets needed for analyses can take time – though not as long as it would if they were all kept in different silos.

Another potential issue is that when data is stored centrally, all the company’s data queries have to go through the warehouse. If the company’s system lacks the resources to deal with so many queries, this can slow down the speed at which data is processed. However, using a scalable cloud solution for data warehousing, where additional resources, charged on a pay per use basis, can be added as and when needed, eradicates this issue.

For many companies, the biggest obstacle to setting up a data warehouse is the cost. When undertaken in-house, there is often significant capital expenditure required for the purchase of hardware and software, together with the overheads of running the infrastructure. Additionally, there are ongoing staffing costs for experienced IT professionals. Again, the solution comes in the form of managed cloud services, like Infrastructure as a Service (IaaS), where the hardware and operating systems are provided without the need for capital expenditure and where software licensing can be significantly less expensive. What’s more, the service provider manages the infrastructure on your behalf, reducing staffing requirements. Even where specialised IT knowledge is required in-house, such as with integrating different systems, the 24/7 technical support from your provider will be there to offer expertise when needed.


Any company undergoing the process of digital transformation needs to consider the benefits of data warehousing. The centralised storage of all the company’s data is essential for companies that wish to integrate their existing business processes with today’s advanced digital technologies. Doing this means you can fully benefit from big data analytics, artificial intelligence and machine learning, and all the crucial insights they offer to drive the company forward.

Setting up a data warehouse in-house, however, presents several major challenges. There is significant capital expenditure required at the outset, together with on-going overheads. In addition, integrating a diverse set of company systems so that data can be centralised is not without its technical challenges. By opting for a cloud solution, however, cap-ex is removed, costs are lowered and many of the technical challenges are managed on your behalf.
