Saying ‘No thanks’ to Microsoft Dynamics Support? Beware!


After you’ve spent big dollars on a major ERP or CRM implementation with Microsoft Dynamics, springing for a post-implementation support contract seems like a lot to ask. But when you consider even just a few of the common issues that arise during a system’s lifecycle, it becomes obvious why DIY support is more expensive and riskier than contracting with a service partner. So, before you say, “No thanks, we’ve got it covered,” here are some things you should think about.

How do I…?

You’re going to have questions. Without a support agreement, who has the answers? Unless you have in-house functional and technical expertise, your best available resource will be online forums and sites like Dynamics 365 Community or the personal blogs of Dynamics MVP experts. You can find useful info there, but there are significant trade-offs. Don’t expect immediate replies to your questions or step-by-step guidance to resolve your “How do I…?” questions. No one on a forum is going to have the understanding of your business or the customizations you’ve made to give you a comprehensive answer. And just remember, free advice is often worth what you pay for it.

Updates

Updates are critical to maintain system performance and security. Who’s going to manage those? Probably no one. It’s easy to overlook updates because they’re always there and you’ll get to them eventually. Except few people actually do, until they encounter a roadblock—like that new feature you need. To get that, you’ll need to catch up with a half dozen system updates and an upgrade or two. What should have been an easy, regular monthly process turns into a fire drill. You’ll need to find a developer who can apply the updates and resolve the issues those updates create, like the need to fix or restore customized features. The further you get behind on updates, the harder it is to bring your system up to date because there usually isn’t a direct path from the current version to multiple versions ahead. Lacking a resource to manage updates is what turns routine maintenance into a major source of business disruption and expense.
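To make that multi-hop problem concrete, here is a minimal sketch that models supported upgrade paths as a graph and finds the shortest chain of upgrades. The version numbers are hypothetical, not actual Dynamics releases; the point is that catching up usually means several intermediate stops, each of which can break customizations.

```python
from collections import deque

# Hypothetical upgrade graph: each version can be upgraded directly
# only to a limited set of later versions.
UPGRADE_PATHS = {
    "9.0": ["9.1"],
    "9.1": ["9.2", "10.0"],
    "9.2": ["10.0"],
    "10.0": ["10.1"],
    "10.1": ["10.2"],
}

def upgrade_route(current: str, target: str) -> list[str]:
    """Breadth-first search for the shortest chain of supported upgrades."""
    queue = deque([[current]])
    seen = {current}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in UPGRADE_PATHS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    raise ValueError(f"No supported upgrade path from {current} to {target}")

print(upgrade_route("9.0", "10.2"))  # ['9.0', '9.1', '10.0', '10.1', '10.2']
```

Four hops, and every hop is a chance for a customized feature to break. That is the fire drill.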

New needs

As your business needs change, so will your system needs. What will you pay for those new features? Who will create new reports, and what will that cost? In all cases, the answer is “A lot.” Spot quotes, also known as statements of work (SOWs), for one-time services seem like a good deal, but over the course of months and years, what you spend on one-time fixes and changes can easily exceed the cost of a support agreement. Spot quotes also take longer to spin up: each one-time SOW carries lead time to process the contract, staff the project and begin work, with people who don’t understand your business and aren’t familiar with your customized implementation. With a support agreement already in place, you reduce your lead time to begin executing on your needs. And you’ll be working with someone who understands the unique needs of your business and your system.


Big bugs

Eventually you’ll encounter an advanced issue or a bug that needs to be resolved. What’s your escalation path? Your resolution options are limited without a support agreement. Microsoft partners, however, have support agreements directly with Microsoft. When your contracted support team is unable to resolve a problem, they have direct access to the most authoritative sources.

The choice between a service agreement to support your new system or going without is often driven by the costs that are most apparent at the time. Because many needs remain unexpected and issues unforeseen, the decision to do it yourself seems like a natural choice. Over time, however, those hidden costs, unexpected issues and unmet needs begin to add up. And it soon becomes clear that avoiding the expense of support may have been the most expensive decision of all.

VMware’s Tanzu can assist in charting a clear route to modernized apps


For most enterprises, the digital journey feels like a rough cab ride through New York City at the height of construction season. In rush hour. The route from legacy estate to modern platform is filled with detours, road closures, emergency braking and many other surprises. And, everyone in the cab has a different opinion about how to get there.

But getting there is critical. Every company recognizes that it needs to transform its applications and operating models to respond to market changes and keep pace with emerging disruptors. A modern IT estate helps organizations transform business-critical systems and practices to support next-generation operating models, which are the lifeblood of new business models. It pushes organizations to streamline and reimagine processes to increase business agility and gain deeper customer and market insights.

That’s why we think VMware’s release of Tanzu, a collection of products and services for delivering and managing cloud applications, is an important step forward in the evolution of modern cloud platforms. Tanzu will help companies break down silos between business units that legacy IT has perpetuated (and reinforced).


For example, companies spend a lot of time, energy and money managing traditional applications on legacy systems in their own way and with their own teams, resources and development methods. Meanwhile, the same organization manages modern applications in the cloud in a different way, with different staff, resources and methods. Tanzu enables companies to manage traditional workloads never designed for the cloud along with modern applications in the same way, viewing them in a single pane of glass.

Tanzu introduces traditional applications to a modern platform and also connects those applications with modern methods and tooling, preparing the way for future modernization efforts. It makes it easier for IT and development to create integrated, cross-functional teams organized around products and customers. And it helps organizations and teams evolve from individual approaches, capabilities and technologies to combinations of them, applied to specific customer journeys in progressive engagements tied directly to business outcomes, for compounding business impact. This makes Tanzu much more than just another management console. It makes it a platform for transformation.

Many of our clients have ongoing IT and application modernization projects that will benefit from the Tanzu suite. We look forward to using Tanzu with our customers not only for application transformation, but also as a platform for teaching organizations how to operate in modern ways.

Modern platforms are the basis for creating modern applications, and they’re also the foundation for thinking and operating in new, disruptive ways. Tanzu will help companies finally bridge the divide between legacy solutions and modern applications to drive value and improve customer experiences while radically reducing costs and increasing efficiency.

Hey! Have a look at the all-new Online DevOps Dojo


DevOps dojos have been wildly popular as on-site workshops that support an organization’s DevOps transformation. But even before COVID-19 and social distancing, in-person sessions had their limits. They could only reach a certain number of employees and customers. To bridge the gap, Anteelo has created Online DevOps Dojo — an open source, immersive learning experience for the DevOps community.

We designed the online dojo as an extension, not a replacement, for on-site DevOps dojos, but now that social distancing is the “new normal,” the timing for an online dojo is even more relevant.

Our primary goal, however, is not to offer training, but to contribute a set of DevOps learning experiences so the DevOps community can assemble and create more content in support of DevOps adoption and talent reskilling. We are eager to build a community around the Online DevOps Dojo — a community of module maintainers, translators, storytellers and even creators of new learning modules.

An immersive story

We designed the Online DevOps Dojo around a story, because there’s nothing better than a good story to get people immersed in something. The Online DevOps Dojo learning modules tell the story of a fictitious company, “Pet Clinic,” and its employees as they go through their DevOps journey. Throughout the modules, you learn about the characters, interact with them, and understand how each one plays a role in the DevOps transformation of Pet Clinic.


The modules, a mix of cultural and technical topics, illustrate important DevOps patterns — how to lead change, version control, continuous integration and “shift left” security — as described in various blog posts, white papers and great books such as “Accelerate” by Nicole Forsgren, Jez Humble and Gene Kim.
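To give a flavor of what a hands-on module can cover, here is a minimal sketch of a “shift left” security check: a script that scans files for likely hard-coded secrets before a commit or build is allowed to proceed. The patterns and wiring are illustrative assumptions, not taken from the dojo’s actual labs; real scanners use far richer rule sets.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; production scanners use much larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS-style access key ID
    re.compile(r"(?i)password\s*=\s*['\"].+['\"]"),  # hard-coded password
]

def scan(paths):
    """Return a list of file:line findings for likely secrets."""
    findings = []
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                findings.append(f"{path}:{lineno}: possible secret")
    return findings

if __name__ == "__main__":
    problems = scan(sys.argv[1:])
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # a non-zero exit fails the commit or build
```

Hooked into a pre-commit hook or a CI job, a check like this moves security feedback to the earliest possible point in the pipeline, which is the essence of shifting left.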

The modules provide an interactive experience in which you can follow the step-by-step instructions as well as go off script to explore and learn more, without fear of breaking anything.

Participants can use the Online DevOps Dojo to:

  • Prepare for a face-to-face DevOps dojo, typically a 1- to 2-week event (that can also be held virtually), by learning techniques in advance
  • Create a complete curriculum with hands-on labs
  • Provide a way to get knowledge when you most need it, all within your browser
  • Share what “good” looks like when answering a question about any DevOps pattern
  • Leverage the story and characters, and even extend the story to create more learning experiences (not necessarily DevOps-related)

We have released the Online DevOps Dojo modules under the Mozilla Public License 2.0 so they are available to benefit the entire DevOps community.


What’s next? Experiment and contribute

As you “shelter in place,” we encourage you to spend some of your time trying out the learning modules. We sincerely hope you’ll enjoy them and that they will help support your DevOps adoption. Depending on the reception of the initial launch, we also plan to release new modules.

Let’s support DevOps adoption! Start browsing the code of the Online DevOps Dojo, review the guidelines for contribution and ask a question by opening an issue in the GitHub repository.

A new way to test apps? – Testing as a Service (TaaS)


There’s a better way to test the software applications powering your latest business services. It’s called Testing as a Service. TaaS helps to improve the quality of your applications in a way that’s faster, more scalable, simpler and cheaper than traditional testing approaches.

The need for a better, faster and more cost-effective way of testing applications is clear. Applications play an increasingly important role in the products and services that businesses deliver in today’s digital economy. However, delivering applications effectively, securely and cost-effectively at speed remains a challenge.

Traditional testing models are neither predictable nor cost-effective. Because they often fail to leverage best-in-class processes and tools, their quality may be lower than required. What’s more, the cost of traditional testing is based on the number of testing professionals involved, their skill sets and the engagement’s duration. So extra costs can easily accrue for certain skills, equipment, even desk space. Estimating your total cost can be tricky.

A more efficient way

TaaS is far more efficient than traditional testing models, too. Service providers offer an output-based delivery approach with an efficient and flexible consumption-based procurement model.

With TaaS, customers procure testing services from a catalog of standardized deliverables that they can use to assess the functionality, technical quality, performance and even security level of their applications. The deliverables are determined based on the testing outputs needed at different stages of the software development lifecycle. That might be the creation of a test strategy, plan or automated test script or the execution of an automated test.

To make that easy, TaaS supports modern Agile and DevOps approaches as well as more traditional lifecycles such as Waterfall.

What really differentiates TaaS from traditional testing models is that customers gain tremendous flexibility. They can adjust the type and quantity of deliverables they receive, as well as when they receive them. Customers can quickly and easily tailor their testing to meet their changing project needs, and to scale testing services up or down based on their evolving business demand, avoiding unnecessary costs.

This kind of flexibility is especially important for Agile projects. There, the development teams may prioritize and re-prioritize from sprint to sprint, based on what will deliver the most value to the business.


More scaling, less cost

TaaS is highly scalable. If your testing volumes are lower than expected, the service can be scaled down. Conversely, if demand increases and testing needs to be scaled up, more testers can be quickly rolled in to create the necessary deliverables.

Also, since TaaS is based on output, the service provider is responsible for managing the staffing and availability of tools. This lets the customer focus on higher-value items. Because the service provider manages the day-to-day activities and resources of the testing team, adjusting capacities as needed to meet demand, there’s less complexity and hassle for customers. Costs are lower, too, since the customer pays only for what they consume. And because consumption prices are fixed, estimating total costs is easy and accurate.
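Here is a simple sketch of why fixed consumption prices make estimating straightforward: the total is just the catalog price times the planned quantity, per deliverable. The catalog entries and prices below are invented for illustration; an actual TaaS catalog would be defined in the service agreement.

```python
# Hypothetical TaaS catalog: a fixed price per standardized deliverable.
CATALOG = {
    "test_strategy": 5_000,
    "test_plan": 2_000,
    "automated_test_script": 150,
    "automated_test_execution": 25,
}

def estimate(order: dict[str, int]) -> int:
    """Total cost = fixed unit price x planned quantity, per deliverable."""
    return sum(CATALOG[item] * qty for item, qty in order.items())

# One phase of testing, sized to current demand; rerun with new
# quantities whenever the project scales up or down.
print(estimate({
    "test_plan": 1,
    "automated_test_script": 40,
    "automated_test_execution": 200,
}))  # -> 13000
```

Scaling up or down then becomes just a change to the quantities, with no renegotiation of rates.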

While TaaS tests are conducted via a public or private cloud, TaaS is not a cloud-only service. It can be used with both your cloud and on-premises systems. Test management software, if required, can also be included as part of the service. The license cost is included in the service, so you won’t have to invest in long-term licenses. That said, if you’ve already made those investments, TaaS can be easily configured to use your existing test-management products.

TaaS is also a lot more than Software as a Service. SaaS provides only the tools; with TaaS, you also get test cases, test automation and test execution, along with sophisticated tooling.

Adopt TaaS and your benefits can include:

  • Testing services with higher quality, faster speed and lower costs
  • Greater flexibility and scalability to adjust testing to meet your evolving business needs
  • Increased testing coverage and efficiency
  • Freedom from up-front investments and maintenance fees

Here’s a real-world example of attaining lower costs: One of our customers wanted to merge the back-end systems of several companies it had recently acquired. This was to be a multi-year project that would include systems for manufacturing, packaging and shipping. The first phase needed to cover more than 300 requirements and 600 test-case executions. The company would also need to test more than 45 third-party application interfaces. With TaaS and our help, the company successfully completed this project phase at a cost nearly 40 percent lower than originally forecast.

Migrating ERP to the Cloud


Many large companies are considering migrating their ERP applications — including SAP S/4HANA — to the public cloud. Drivers for many companies include cost, flexibility or adopting a cloud-first strategy.


Q: What questions do companies ask when they are considering moving ERP applications to a public cloud?

A: First, they ask “Can I move?” These are large, robust systems, and companies want to make sure their apps are technically capable of running in the cloud. The second question they ask is “Should I move?” They want to be confident that when they migrate their mission-critical applications, they can continue running their business. And the third question is, “How expensive is a public cloud?” We manage, migrate and run some of the most complex SAP systems in the world, so I know firsthand that SAP can run effectively in the cloud. SLAs and maintenance can continue in the cloud, and cloud security is well-covered. Cost reduction can be one of the big benefits of public cloud, but only if the design, migration and run phases are properly managed.


Q: How do companies decide when the right time is to make the move?

A: Some of our customers have a good business case for waiting to upgrade, but most are exploring their options. Moving to SAP S/4HANA or other ERP systems is a big change, so why not move to cloud at the same time? These are big systems that handle companies’ finance and production lines, so there is often a limited maintenance window to bring them down. If a transition is on your roadmap, you’ll save yourself a step. And in the current macroeconomic climate, a slowdown in system usage is a good time to make changes in preparation for heavier future traffic.


Q: What business disruption can a company expect when it migrates to the cloud?

A: Moving to the cloud is not very different from any other migration. It’s become a routine process, and when done correctly, companies can expect near-zero downtime.

Companies can mitigate risks even more if they first test out the migration with one business unit and a single proof of concept. They also should clean up all the data they don’t need before migrating.


Q: What benefits have clients achieved?

A: The benefits are elasticity, agility and being able to grow on command. In the case of SAP, companies can tap into its new analytics capabilities as well as those of the ecosystem they gain when moving to the hyperscalers’ clouds.

If you want to look at business outcomes, consider Croda International, a global chemical manufacturer. It wanted to take advantage of the on-demand capabilities of cloud while increasing the ability of its IT department to respond to business demands. The company developed a strategy for moving to the cloud in anticipation of a major upgrade to SAP S/4HANA. By deploying our Platform as a Service for SAP on Azure as part of a low-risk, phased approach, the company migrated 106 workloads in just 12 weeks.


Q: How should companies start the whole process?

A: Companies need to have frank conversations — within the business and with their systems integrator. The new environments are richer and easier to consume and combine. At the same time, companies need to think about cost control and avoid lock-in or technical debt. They need to think about what they are trying to achieve, both digitally and in their business. They can introduce new technology. They can think about connecting their SAP landscape to machine learning or the internet of things (IoT). They can introduce new features to customers and suppliers. In the cloud, they can leverage so many things they weren’t able to access before. Thinking about the end state will allow companies to move their businesses forward.


A 12-point operational resilience structure


Businesses today must take a new approach to operational resilience so that they can be more adept at anticipating disruptive events and agile in responding to and recovering from them.

In a world where risks and compliance requirements rapidly expand and evolve, it’s not a question of if there will be a disruption to your services, systems or processes but when.

An organization that improves its approach to operational resilience can greatly minimize its risk when business disruptions occur. That includes external disruptions such as natural disasters, extreme weather conditions and far-reaching medical crises. It also includes internal disruptions such as system outages.

Until quite recently, operational resilience was developed with a risk-avoidance mindset: assess the likelihood of a particular type of disruption occurring, then plan accordingly. Given finite resources, disruptions that seemed relatively unlikely to occur were considered low risk and might not be planned for at all.


One size doesn’t fit all

That way of thinking no longer makes sense. Today, organizations face operational disruptions from more sources than ever before, and businesses must widen their planning scope. For instance, companies affected by the supply chain upsets of the past year now know that they have to develop operational resilience plans to handle those business disruptions. Cybersecurity threats, natural catastrophes, new laws and regulations and even such changes as new competitors entering the market continue to pose their own risks to operational resilience.

There is no one-size-fits-all solution for responding to such a wide range of operational risks. The capabilities needed to respond to a ransomware attack are different from those needed to recover from a massive wildfire.

Responses must also be tailored to align with organizations’ different requirements. A bank that survives a highly publicized cyberattack needs to repair its reputation. An online retailer whose site is shut down after a hurricane needs to restore 24×7 service availability. And a manufacturer with factories in a medical-crisis hot spot needs to provide a safer workplace for employees.


The operational resilience framework

Given all this, what should organizations do to improve their approach to operational resilience? We’ve developed a framework that identifies 12 management disciplines that can be grouped together in different ways to ensure appropriate operational resilience responses for different risks. The core solutions and services that are available as part of the Operational Resilience Framework combine multiple components of our Enterprise Technology Stack, including applications, security and ITO.

You don’t have to implement all of these disciplines at once. But together, they can enable and strengthen your organization’s operational resilience.

Here’s a quick rundown of the framework’s 12 disciplines and the actions they enable:

  • Continuity management: Analyze business impacts, set return-to-work tactics
  • Corporate incident response: Manage health, safety and environmental risks, proactively mitigate risk
  • Crisis management and communications: Orchestrate response plans, achieve a 360-degree view of the current crisis status
  • Critical enterprise assets: Discover, map and apply governance to key assets
  • Cyber and information security: Respond and recover from attacks
  • Governance, audit and compliance: Continuously monitor compliance, apply industry guidelines
  • IT disaster recovery: Minimize the impacts, structure and test recovery plans
  • Operational risk management: Identify and assess business risks, monitor and minimize issues
  • Organizational behavior: Drive and measure effective attitudes and practices
  • People and culture: Encourage an operating model for resilience
  • Service operations: Assure operational excellence and efficiency
  • Supply-chain management: Manage vendor risk, assure continuity


Real-world resilience

How do you apply this framework? Remember, different risks require different capabilities and responses.

In the case of cyber incidents, for example, your primary focus should be on bringing together capabilities including cyber and information security, IT disaster recovery, critical enterprise assets and governance, audit and compliance.

For business interruptions, you should focus first on building capabilities in areas including continuity management, crisis management and communications, and supply chain management.

Among the main areas of focus for ensuring compliance with new laws and regulations will be continuity management, operational risk management, and governance, audit and compliance.
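The groupings described above can be captured in a simple lookup, sketched below. The scenario names and discipline lists follow the examples in this post; a real implementation of the framework would of course be far more detailed.

```python
# Discipline groupings for common disruption scenarios, following
# the examples discussed above.
RESPONSE_MAP = {
    "cyber_incident": [
        "Cyber and information security",
        "IT disaster recovery",
        "Critical enterprise assets",
        "Governance, audit and compliance",
    ],
    "business_interruption": [
        "Continuity management",
        "Crisis management and communications",
        "Supply-chain management",
    ],
    "new_regulation": [
        "Continuity management",
        "Operational risk management",
        "Governance, audit and compliance",
    ],
}

def disciplines_for(scenario: str) -> list[str]:
    """Return the primary disciplines to mobilize for a given disruption."""
    if scenario not in RESPONSE_MAP:
        raise ValueError(f"No response grouping defined for: {scenario}")
    return RESPONSE_MAP[scenario]

print(disciplines_for("cyber_incident"))
```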

What’s needed today is an approach to operational resilience that is holistic, free of functional silos and driven by a mindset of “not if, but when.” Organizations that develop this new resilience mindset and practices will be ready for just about anything.

The impact of low-code on Software Development


An emerging way to program, known as low-code application development, is transforming the way we create software. With this new approach, we’re creating applications faster and more flexibly than ever before. What’s more, this work is being done by teams in which up to three-quarters of the members may have no prior experience in developing software.

Low-code development is one of the tools we deploy for our application services and solutions, which is a key component of our Enterprise Technology Stack. The approach is gaining traction. Research firm Gartner recently predicted that the worldwide market for low-code development technology will hit $13.8 billion this year, an increase of nearly 23% over last year. This rising demand for low-code technology is being driven by several factors, including the surge in remote development we have seen over the last year, digital disruption, hyper-automation and the rise of so-called composable businesses.

Low-code platforms are especially valuable to the public sector. Government IT groups need to be innovative and agile, yet they often struggle to be sufficiently responsive. Traditionally, they’ve developed applications using hard coding. While this approach offers a great deal of customization, it typically comes at the cost of long development times and high budgets. By comparison, low-code development is far faster, more agile and less costly.

With low-code platforms, public-sector developers no longer write all software code manually. Instead, they use visual “point-and-click” modeling tools — typically offered as a service in the cloud — to assemble web and mobile applications used by citizens. These “citizen developers” are a new breed of low-code programmer: often public-sector end users themselves, with no prior development experience. The technology is relatively easy to learn and use.

Low-code tools are not appropriate for all projects. They’re best for developing software that involves limited volumes, simple workflows, straightforward processes and a predictable number of users. For these kinds of projects, we estimate that up to 80% of the development work can be done by citizen developers.

Experienced developers can benefit from using low-code tools, too. Over my own career of more than 30 years, I used traditional development methods three times to laboriously and methodically develop a mobile app. Now that I’ve adopted speedy low-code tools, I’ve already developed nine try-out mobile apps in just the last year.

Apps in a snap

To get a sense of just how quick working with low-code tools can be, consider a project we recently completed for the Government of Flanders. The project involved the 96 vaccination centers the government was opening across Flanders. To track the centers’ inventories of vaccine doses and associated supplies, the government required a custom software application.

After a classical logistics software vendor passed on the project, we held our official kick-off meeting on Feb. 1. Just 18 days later, not only was our low-code inventory application up and running (with only a few open issues, resolved in the subsequent days), but the government’s first vaccination center was also open for service. There’s no way we could have developed the application that quickly using traditional hard coding.

Low-code development can also be done with minimal staffing because all the ‘heavy lifting’ is done by the low-code environment. However, this is not an excuse to put new employees in charge of critical applications. Experienced staff are still needed to solve the classical issues of design, change management, planning, licenses, support, scoping and contracting.

We developed the Belgian vaccine inventory application with a code developer team of just two junior developers — one with the company for four months, the other for eight — working full time on the application. They were steered and supported by staff in more classical roles: a seasoned analyst, a representative from the government, a project manager, a low-code (Microsoft) expert and a solution architect. For these staff members, the low-code approach consumed only about 40% of their time.

Of course, we also leveraged agreements the government already had around Office 365 and Azure. But that’s yet another advantage of low-code software: It can exploit existing IT investments.


Finding flexibility

Flexibility is another big advantage of low-code development. In the context of software development, this means being able to make quick mid-course corrections. With traditional development tools, responding quickly to surprises that inevitably arise can be difficult if not impossible. But with low-code tools, making mid-course corrections is actually part of the original plan.

We had to make a mid-course correction when using low-code tools to develop an application to translate official government documents into all 18 languages spoken within Belgium.

Our first version of the software, despite using a reputable cognitive translation service, was missing three languages and providing sub-standard quality for another two. To fix this, our citizen developer — another new hire with no previous technical background — essentially clicked his way through the software’s workflows, integrated a second cognitive service from another supplier, and then created a dashboard indicating which of the two services the software should use when translating to a particular language.
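Behind those clicks is a simple routing pattern: keep a per-language table of which translation provider to use, and fall back to a second provider where the first is missing or weak. The sketch below assumes two hypothetical providers and an invented routing table; it shows the shape of the fix, not the actual implementation.

```python
# Hypothetical per-language routing table, like the dashboard described
# above: languages where provider A was missing or sub-standard are
# routed to provider B instead.
PREFERRED_PROVIDER = {
    "de": "provider_a",
    "fr": "provider_a",
    "pl": "provider_b",  # sub-standard quality from provider A
    "ro": "provider_b",  # not offered by provider A
}

def call_provider_a(text: str, lang: str) -> str:
    return f"[A:{lang}] {text}"  # stand-in for the first cognitive service

def call_provider_b(text: str, lang: str) -> str:
    return f"[B:{lang}] {text}"  # stand-in for the second cognitive service

def translate(text: str, target_lang: str) -> str:
    """Route a translation request to the preferred provider for the language."""
    if PREFERRED_PROVIDER.get(target_lang, "provider_a") == "provider_a":
        return call_provider_a(text, target_lang)
    return call_provider_b(text, target_lang)

print(translate("Official notice", "pl"))  # routed to provider B
```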

Low-code development tools, when used in the right context, are fast, flexible and accessible to even first-time developers. And while these tools have their limitations and pitfalls, that’s seldom an excuse for not using them. Low-code tools are transforming how the public sector develops software, and we expect to see a lot more of it soon.

A better approach to Data Management, from Lakes to Watersheds

As a data scientist, I have a vested interest in how data is managed in systems. After all, better data management means I can bring more value to the table. But I’ve come to learn that it’s not how an individual system manages data but how well the enterprise, holistically, manages data that amplifies the value of a data scientist.

Many organizations today create data lakes to support the work of data scientists and analytics. At the most basic level, data lakes are big places to store lots of data. Instead of searching for needed data across enterprise servers, users pour copies into one repository – with one access point, one set of firewall rules (at least to get in), one password (hallelujah) … just ONE for a whole bunch of things.

Data scientists and Big Data folks love this; the more data, the better. And enterprises feel an urgency to get everyone to participate and send all data to the data lake. But, this doesn’t solve the problem of holistic data management. What happens, after all, when people keep copies of data that are not in sync? Which version becomes the “right” data source, or the best one?

If everyone is pouring in everything they have, how do you know what’s good vs. what’s, well, scum?

I’m not pointing out anything new here. Data governance is a known issue with data lakes, but lots of things relegated to “known issues” never get resolved. Known issues are unfun and unsexy to work on, so they get tabled, back-burnered, set aside.

Organizations usually have good intentions to go back and address known issues at some point, but too often, these challenges end up paving the road to Technical Debt Hell. Or, in the case of data lakes, making the lake so dirty that people stop trusting it.

To avoid this scenario, we need to go to the source and expand our mental model from talking about systems that collect data, like data lakes, to talking about systems that support the flow of data. I propose a different mental model: data watersheds.

In North America, we use the term “watershed” to refer to drainage basins that encompass all waters that flow into a river and, ultimately, into the ocean or a lake. With this frame of reference, let’s contrast this “data flow” model to a traditional collection model.

In a data collection model, data analytics professionals work to get all enterprise systems contributing their raw data to a data lake. This is good, because it connects what was once systematically disconnected and makes it available at a critical mass, enabling comparative and predictive analytics. However, this data remains contextually disconnected.

Here is an extremely simplified view of four systematically and contextually disconnected enterprise systems: Customer Relationship Management (CRM), Finance/Accounting, Human Resources Information System (HRIS), and Supply Chain Management (SCM).

CRM
  • Stores full client names and system-generated client IDs
  • Stores products purchased; field manually updated by account managers
  • Stores account manager names
  • Goal: Enable each account manager to track the product/contract history of each client

Finance/Accounting
  • Stores abbreviated customer names (the tool has a too-short character limit) and customer account numbers
  • Stores a list of all company locations; uses 3-digit country codes
  • Stores abbreviated vendor names (same too-short character limit), vendor account numbers and vendor IDs with three leading zeros
  • Stores Business Unit (BU) names and BU IDs
  • Goal: Track all income, expenses and assets of the company

HRIS
  • Stores all employee names and employee IDs
  • Stores a list of all company locations with employee assignments; uses 2-digit country codes
  • Goal: Manage key details on employees

SCM
  • Maintains the product list and system-generated product IDs
  • Stores vendor names, vendor account numbers and vendor IDs (no leading zeros)
  • Stores material IDs and names
  • Goal: Track all vendors, materials from vendors, Work in Progress (WIP), and final products


Let’s assume that each system has captured data to support its own reporting and then sends daily copies to a data lake. That means four major enterprise systems have figured out multiple privacy and security requirements to contribute to the data lake. I would consider this a successful data collection model.

Note, however, that the four systems have overlap in field names, and the content in each area is just a little off — not so far as to make the data unusable, but enough to make it difficult. (I also intentionally left out a good connection between CRM Clients and Finance/Accounting Customers in my example, because stuff like that happens when systems are managed individually. And while various Extract, Transform and Load (ETL) tools or semantic layers could help, the problem goes beyond simply declaring CRM Client = Finance/Accounting Customer.)
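To see what “a little off” costs in practice, here is a small pandas sketch of the vendor overlap above: the same vendors exist in Finance/Accounting (leading zeros, abbreviated names) and SCM (no leading zeros, full names), and nothing joins until someone normalizes the keys by hand. The vendor names and IDs are invented for illustration.

```python
import pandas as pd

# Finance/Accounting: vendor IDs carry three leading zeros and names
# are abbreviated by a too-short character limit.
finance = pd.DataFrame({
    "vendor_id": ["000123", "000456"],
    "vendor_name": ["ACME MFG", "GLOBEX CO"],
})

# SCM: the same vendors, with no leading zeros and full names.
scm = pd.DataFrame({
    "vendor_id": ["123", "456"],
    "vendor_name": ["Acme Manufacturing", "Globex Corporation"],
})

# Neither the IDs nor the names match as stored; an analyst has to
# normalize the keys before the two systems can be joined at all.
finance["vendor_key"] = finance["vendor_id"].str.lstrip("0")
scm["vendor_key"] = scm["vendor_id"]

merged = finance.merge(scm, on="vendor_key", suffixes=("_fin", "_scm"))
print(merged[["vendor_key", "vendor_name_fin", "vendor_name_scm"]])
```

Multiply this by every overlapping field in every system feeding the lake, and the 90/10 cleaning-to-analyzing ratio described below stops being surprising.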

If you think about customer lists, it’s not unreasonable for there to be hundreds, if not thousands, of customer records that, in this example, need to be reconciled with client names. This will have a significant impact on analytics.

Take an ad hoc operational example: Suppose a vendor can only provide half of the materials they normally provide for a key product. The company wants to prioritize delivery to customers who pay early, and they want to have account managers call all others and warn them of a delay. That should be easy to do, but because we are missing context between CRM and Finance/Accounting, and the CRM system is manually updated with products purchased, some poor employee will be staying late to do a lot of reconciling and create that context after the fact.
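Here is roughly what that poor employee ends up doing: with no shared key between CRM clients and Finance/Accounting customers, the join degenerates into fuzzy string matching. The names below are invented; the point is that every match is a guess someone then has to verify.

```python
import difflib

# CRM stores full client names; Finance/Accounting stores abbreviated
# customer names, and no shared key links the two systems.
crm_clients = ["Northwind Traders Incorporated", "Contoso Pharmaceuticals"]
finance_customers = ["NORTHWIND TRADERS INC", "CONTOSO PHARMA"]

def best_match(client: str, candidates: list[str]) -> tuple[str, float]:
    """Guess the Finance record for a CRM client by string similarity."""
    scored = [
        (cand, difflib.SequenceMatcher(None, client.lower(), cand.lower()).ratio())
        for cand in candidates
    ]
    return max(scored, key=lambda pair: pair[1])

for client in crm_clients:
    match, score = best_match(client, finance_customers)
    print(f"{client!r} -> {match!r} (similarity {score:.2f})")
```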

I’ve heard plenty of data professionals comment something like, “I spend 90% of my time cleaning data and 10% analyzing it on a project.” And the responses I hear are not, “Whaaaa?? You’re doing something wrong.” They are, “Oh man, I sooooo know what you mean.”

Whaaaa?? We’re doing something wrong.

The time analytics professionals spend cleaning and stitching data together is time not spent discovering correlations, connections and/or causation indicators that turn data into information and knowledge. This is ridiculous because today’s technologies can do so much of this work for us.

The point of a data watershed approach is to eliminate the missing context. The data watershed is not a technical model for how to get data into a lake; it’s a governance/technical model that ensures data has context when it enters a source system, and that context flows into the data lake.

If we return to my four example systems and take a watershed approach, the interaction looks more like this, with the arrows indicating how the data feeds each system:

[Diagram: the CRM, Finance/Accounting, HRIS and SCM systems connected by arrows showing how data feeds from system to system and into the data lake]

While many organizations do have data flowing from system to system, they often don’t have connections between every system. Additionally, it’s not always clear who should “own” the master list for a field.

In my view, the system that maintains the most metadata around a field is the system that “owns” the master data for that field. So, in my example above, both the HRIS and Finance/Accounting systems maintain location lists, but they use different country codes. Finance/Accounting is also going to maintain depreciation schedules or lease agreements on the locations, so Finance/Accounting wins. The HRIS, unless there is a tool limitation, should mirror and, preferably, be fed the location data from the Finance/Accounting system.
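That ownership rule is mechanical enough to sketch: count the metadata each system maintains around a field and let the richest system win. The metadata lists below restate the location example; a real master data management process would weigh far more than a simple count.

```python
# Metadata each system maintains around the "location" field,
# restating the example above.
LOCATION_METADATA = {
    "Finance/Accounting": [
        "address",
        "country_code_3",
        "depreciation_schedule",
        "lease_agreement",
    ],
    "HRIS": [
        "address",
        "country_code_2",
        "employee_assignments",
    ],
}

def master_system(metadata_by_system: dict[str, list[str]]) -> str:
    """Owner of the master data: the system maintaining the most metadata."""
    return max(metadata_by_system, key=lambda system: len(metadata_by_system[system]))

print(master_system(LOCATION_METADATA))
# -> 'Finance/Accounting'; the HRIS should be fed locations from there
```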

In this example, when each system sends its data to a data lake, it has natural context. Data analytics professionals can grab any field and know the data is going to match – though I would argue that best practice would be to use the field from the “master” system. However, if everything is working right, this should be irrelevant.

Since a data watershed is a governance/technical model, it addresses not just how data flows but how it’s governed. This stewardship requires cross-departmental collaboration and accountability. The processes are neither new nor necessarily difficult, but the execution can be complex. The result is worth the effort, though, as all enterprise data can then support advanced analytics.

The governance model I picture is an amalgamation of DevOps – the merging of software development and IT operations – and the United Federation of Planets (UFP) from “Star Trek.”

By putting data management and data analytics together in the same way the industry has combined software developers and IT operations, there is less opportunity for conflicting priorities. And, any differences must be reconciled if the project hopes to succeed.

After borrowing from the DevOps paradigm, the reason the governance model I like best is the UFP – and not just because I get to drop a Trekkie reference – is that it is the government of a large fictional universe, built on the best practices and known failures of our own individual government structures.

The UFP has a central leadership body, an advising cabinet and semiautonomous member states. I think this setup is flexible enough to work with multiple organizational designs, and it enables holistic data management while addressing the nuances of individual systems.

I would expect the “President of the Federation” to be a Chief Information, Technology, Data, Analytics, etc. Officer. The “Cabinet” would be made up of Master Data Management (MDM), Records and Retention, Legal, HR, IT Operations, etc. And the “Council” members would be the analytics professionals from all the data-generating and -consuming business units in the organization.

And, it’s this last part – a sort of Vulcan Bill of Rights – I feel the strongest about:

Whoever is responsible for providing the analytics should be included in the governance of the data. Those who have felt the pain of munging data know what needs to change – and they need to be empowered to change it.

Data watersheds represent an important shift in thinking. By expanding the data lake model to include the management of enterprise data at its source, we change the conversation to include data governance in the same breath as data analytics — always.

With this approach, data governance isn’t a “known issue” to be addressed by some and tabled by others; it’s an integral part of the paradigm. And while it may take more work to implement at the outset, the dividends from making the commitment are immense: Data in context.
