Crucial ways continuous delivery improves your security posture


Continuous delivery yields a host of IT and operational benefits, including proven competitive advantages like faster deployments, quicker responses to customer feedback and faster bug fixes. But one benefit that tends not to make the marquee list — and should probably be headlining it — is security.

It’s really quite simple: with continuous delivery, crucial security enhancements, updates and fixes to applications can be pushed live quickly, getting the improved protection into production sooner. What could be better than that?

The traditional, slow, batch-oriented waterfall approach


Typically, in the traditional ITSM approach, when a security incident happens, it is captured and consolidated with other requirements to be addressed in the next application release. Sometimes an urgent patch release can be delivered sooner, in a few weeks – if it can rapidly progress through the cycle of fix, regression testing, release preparation, release testing and maintenance. But if the fix requires a major release, it could be many months until it can be made available, and in most cases, the only thing you can do in the meantime is document the incidents.

That’s too slow.

A better, faster way — continuous delivery and DevSecOps


A modern service management approach combining continuous delivery and DevSecOps supports the core tenets of information security: data confidentiality, integrity, and availability.  A dedicated team provides continuous delivery by making small or incremental changes every day or multiple times a day. DevSecOps secures the continuous integration and delivery pipeline, as well as the content that’s coming through that pipeline.

You gain three key advantages:

Speed. Continuous delivery and DevSecOps dramatically improve security because they allow malicious attacks and bugs to be addressed as soon as they’re identified, not just added to some logbook. In many cases, the window for action shrinks from six to eight weeks down to minutes. As a result, far fewer incidents become problems that impact IT and business operations.

Consistency. IT teams working under traditional ITSM often worry that the continuous delivery and DevSecOps approach will create more opportunity for mistakes and bugs because more changes are happening more often. In practice, the exact opposite is true: because each change is small and flows through the same automated pipeline of builds, tests and checks, defects are caught earlier and are far easier to isolate and roll back.

Flexibility. A DevSecOps approach simplifies the introduction of blue/green and canary releases — implementing a new release while continuing to operate the prior release — into your delivery capability. This allows you to redirect modest amounts of traffic to the new release, making it possible to identify potential issues without impacting many users. It also lets you rapidly shift all traffic back to the current release should a problem be identified.
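To make the canary mechanic concrete, here is a minimal sketch in Python of a weighted router that sends a small, configurable share of requests to a new release and can shift all traffic back to the stable release in a single call. The class, endpoints and weights are illustrative assumptions, not part of any particular product or pipeline.

```python
import random


class CanaryRouter:
    """Route a configurable fraction of traffic to a new release.

    Hypothetical sketch: endpoints and weights are illustrative only.
    """

    def __init__(self, stable_url: str, canary_url: str, canary_weight: float = 0.05):
        self.stable_url = stable_url
        self.canary_url = canary_url
        self.canary_weight = canary_weight  # e.g. 5% of requests go to the new release

    def pick_backend(self) -> str:
        """Choose which release serves the next request."""
        return self.canary_url if random.random() < self.canary_weight else self.stable_url

    def promote(self) -> None:
        """Send all traffic to the new release once it proves healthy."""
        self.canary_weight = 1.0

    def rollback(self) -> None:
        """Shift all traffic back to the current (stable) release."""
        self.canary_weight = 0.0


# Example: start with 5% canary traffic, roll back instantly if a problem surfaces.
router = CanaryRouter("https://app-stable.example.com", "https://app-canary.example.com")
backend = router.pick_backend()
router.rollback()  # all requests return to the stable release
```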

The modern approach offers a variety of powerful tactics for quickly countering attacks. For example, workloads can be designed to move between cloud providers using Pivotal Cloud Foundry, containers or other homogenizing technology that offers the flexibility to move systems from one cloud provider to another. If there is a big denial of service attack in one provider, you could redeploy to another provider or back to a private data center with the click of a button. If an attack is focused on a particular IP, you recreate the environment at a new IP and block the other one completely. Structuring applications in this kind of push-button deployment mode creates opportunities for all sorts of similar scenarios.
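As a rough illustration of that push-button idea, the sketch below mocks up the orchestration logic: recreate the workload on an alternate provider (or at a fresh IP), repoint the service and block the attacked address. Every function and name here is a hypothetical placeholder; a real implementation would call your provider's deployment, DNS and firewall APIs.

```python
from dataclasses import dataclass


@dataclass
class Deployment:
    provider: str   # e.g. "cloud-a", "cloud-b", "private-dc" (illustrative names)
    ip_address: str


def deploy_workload(provider: str) -> Deployment:
    """Placeholder: provision the containerized workload on the given provider."""
    new_ip = "203.0.113.10"  # in practice, returned by the provider after provisioning
    return Deployment(provider=provider, ip_address=new_ip)


def update_dns(ip_address: str) -> None:
    """Placeholder: repoint the service record at the new deployment."""
    print(f"DNS now points to {ip_address}")


def block_ip(ip_address: str) -> None:
    """Placeholder: add a firewall or null-route rule for the attacked address."""
    print(f"Blocking all traffic to {ip_address}")


def evacuate(current: Deployment, fallback_provider: str) -> Deployment:
    """'Push-button' response: recreate the environment elsewhere, block the old IP."""
    replacement = deploy_workload(fallback_provider)
    update_dns(replacement.ip_address)
    block_ip(current.ip_address)
    return replacement


# Example: a denial-of-service attack hits one provider; shift to another.
attacked = Deployment(provider="cloud-a", ip_address="198.51.100.7")
survivor = evacuate(attacked, fallback_provider="cloud-b")
```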

How to move forward

Realizing the security benefits that come from implementing continuous delivery and DevSecOps may require a deep cultural change in the way your company builds and delivers software. Increasingly, security will become a secondary competency of developers, with risk ownership devolving from the central security team to application owners. In this new mode of operating, we need to make sure the right guard rails are in place and that the central security team provides the necessary mentorship and support.

It’s a challenge, no question. But worth the rewards.

Successfully navigating some of these changes is explored in a recent post, “How to jump start your enterprise digital transformation.” A seven-page paper, “DevSecOps: Why security is essential,” is another good resource.

Saying ‘No thanks’ to Microsoft Dynamics Support? Beware!


After you’ve spent big dollars on a major ERP or CRM implementation with Microsoft Dynamics, springing for a post-implementation support contract seems like a lot to ask. But when you consider even just a few of the common issues that arise during a system’s lifecycle, it becomes obvious why DIY support is more expensive and riskier than contracting with a service partner. So, before you say, “No thanks, we’ve got it covered,” here are some things you should think about.

How do I…?

You’re going to have questions. Without a support agreement, who has the answers? Unless you have in-house functional and technical expertise, your best available resources will be online forums and sites like Dynamics 365 Community or the personal blogs of Dynamics MVP experts. You can find useful information there, but there are significant trade-offs. Don’t expect immediate replies or step-by-step guidance for your “How do I…?” questions. No one on a forum has the understanding of your business or the customizations you’ve made to give you a comprehensive answer. And just remember, free advice is often worth what you pay for it.

Updates

Updates are critical to maintain system performance and security. Who’s going to manage those? Probably no one. It’s easy to overlook updates because they’re always there and you’ll get to them eventually. Except few people actually do, until they encounter a roadblock—like that new feature you need. To get it, you’ll need to catch up with a half dozen system updates and an upgrade or two. What should have been an easy, regular monthly process turns into a fire drill. You’ll need to find a developer who can apply the updates and resolve the issues those updates create, like the need to fix or restore customized features. The further you fall behind on updates, the harder it is to bring your system up to date, because there usually isn’t a direct path from the current version to multiple versions ahead. Lacking a resource to manage updates is what turns routine maintenance into a major source of business disruption and expense.

New needs

As your business needs change, so will your system needs. What will you pay for those new features? Who will create new reports, and what will that cost? In all cases, the answer is “A lot.” Spot quotes, also known as statements of work (SOWs), for one-time services seem like a good deal, but over the course of months and years, what you spend for one-time fixes and changes can easily exceed the cost of a support agreement. Spot quotes also take longer to spin up: each one-time SOW involves lead time to process the contract, staff the project and begin work with people who don’t understand your business and aren’t familiar with your customized implementation. With a support agreement already in place, you reduce the lead time to begin executing on your needs. And you’ll be working with someone who understands the unique needs of your business and your system.


Big bugs

Eventually you’ll encounter an advanced issue or a bug that needs to be resolved. What’s your escalation path? Your resolution options are limited without a support agreement. Microsoft partners, however, have support agreements directly with Microsoft. When your contracted support team is unable to resolve a problem, they have direct access to the most authoritative sources.

The choice between a service agreement to support your new system or going without is often driven by the costs that are most apparent at the time. Because many needs remain unexpected and issues unforeseen, the decision to do it yourself seems like a natural choice. Over time, however, those hidden costs, unexpected issues and unmet needs begin to add up. And it soon becomes clear that avoiding the expense for support may have been the most expensive decision of all.

Self-Sovereign Identity raises the value of data shared in an ecosystem

As organizations focus on data-driven business models to remain competitive, they will increasingly seek to collaborate with partners and exchange data. Data shared in an ecosystem is more valuable than data locked in a silo because it leads to new innovations and customer experiences. This trend is playing out in many industries including transportation, logistics, energy, manufacturing, healthcare, telecommunications and financial services.

The effort of maintaining countless data repositories has made data acquisition very expensive, causing a drag on the overall competitiveness of an industry, not to mention the additional burden of adherence to the General Data Protection Regulation and personally identifiable information standards with respect to handling personal data. In the case of autonomous cars, for example, there is so much data to be had — and so much needed — that pooling it makes sense to get a sufficient amount for R&D in as timely and cost-effective a manner as possible. This moves the entire industry forward.

Through a combination of internet of things, artificial intelligence and distributed ledger technology (DLT), we will see auto manufacturers, fleet operators, OEMs and end users willing to share and exchange data as digitized assets through data marketplaces, enabling the players to benefit from new offerings based on the transparency and monetization of shared data.

Shifting from cars to mobility services


In the automotive industry, the market is shifting from selling cars to providing mobility services. A DLT-based system, combined with self-sovereign identity, makes it straightforward to build a mobility ecosystem that enables new ways to engage with customers and partners through trusted, safe data exchange. For example, customers can get the best deal on car financing without re-entering their personal details for approval at each dealership. Customer interactions become seamless because customer history (e.g., loyalty data) is immediately available with the customer’s explicit consent, without the need to store this data at each dealership or at the car manufacturer.

A trusted and verifiable data-enabled mobility ecosystem helps companies offer new services, such as car-sharing or car-exchanging that can involve dealers, for example, as drop-off and pick-up locations. This includes making the process of verifying auto insurance and driver qualifications seamless.

If I’m on vacation and don’t need my car, I can share it, or if I drive a compact car and need an SUV for a few days, I can connect with the ecosystem and benefit from this shared economy. In addition, the revenue I collect from the shared vehicle could be applied to a down payment on a new car.

Dealers benefit too. They can offer inspecting and cleaning services for the shared cars, and they get potential new customers at their location. Car makers benefit because they can grow their brand’s value and get insights into car usage.

Controlling identity, unblocking consent


Empowering an individual or an asset to control one’s own identity is a precursor to companies’ reaping the benefits of pooled data. Through self-sovereign identity, an individual will be able to consent, authenticate or verify themselves without having to present their documents. Users can access third-party-owned products and services while keeping their anonymity.

Many organizations and consortiums are contributing towards the establishment of open source decentralized identity to achieve interoperability among all participants, set protocols, and develop technologies and code in areas including decentralized identifiers (DID) and verification, storage and compute, authentication, claims and verifiable credentials. The focus of these efforts has been to decouple the trust between the identity provider and the relying party to create a more flexible and dynamic trust model such that the ecosystem benefits from increased market competition and customer choice.
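As a rough, non-normative sketch of the credential flow these efforts enable, the Python below plays out the roles of issuer and relying party, with an HMAC standing in for real DID-based, asymmetric signatures. The key, claim names and functions are illustrative assumptions, not the W3C verifiable-credentials data model.

```python
import hashlib
import hmac
import json

# Illustrative only: a real deployment would use DID documents and asymmetric
# signatures; here a shared HMAC key stands in for the issuer's keys.
ISSUER_KEY = b"issuer-secret-key"


def issue_credential(holder_did: str, claims: dict) -> dict:
    """Issuer signs a set of claims about the holder."""
    payload = json.dumps({"subject": holder_did, "claims": claims}, sort_keys=True)
    signature = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def verify_credential(credential: dict) -> bool:
    """Relying party checks the issuer's signature over the presented claims,
    without the holder re-presenting the underlying documents."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])


# Example: a dealership verifies a driver's qualification and insurance claims.
vc = issue_credential("did:example:alice",
                      {"licensed_driver": True, "insurance_valid": True})
assert verify_credential(vc)
```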

In 2020, we expect to witness deployment of self-sovereign identity to unblock consent, which will help organizations leverage previously inaccessible data sets without breaching privacy regulations. This will facilitate decentralized data marketplaces that provide a level playing field by enabling any market player to monetize data. It also allows for improved and customized offerings, thus setting higher standards and increasing the overall value of the ecosystem.

VMware’s Tanzu can assist in charting a clear route to modernised apps


For most enterprises, the digital journey feels like a rough cab ride through New York City at the height of construction season. In rush hour. The route from legacy estate to modern platform is filled with detours, road closures, emergency braking and many other surprises. And, everyone in the cab has a different opinion about how to get there.

But getting there is critical. Every company recognizes that it needs to transform its applications and operating models to respond to market changes and keep pace with emerging disruptors. A modern IT estate helps organizations transform business-critical systems and practices to support next-generation operational models, which are the lifeblood of new business models. That, in turn, pushes organizations to streamline and re-imagine processes to increase business agility and achieve heightened levels of customer and market insight.

That’s why we think VMware’s release of Tanzu and its solutions for delivering and managing cloud applications is an important step forward in the evolution of modern cloud platforms. Tanzu is a collection of products and services that will help companies break down silos between business units that have been perpetuated (and reinforced) by legacy IT.


For example, companies spend a lot of time, energy and money managing traditional applications on legacy systems in their own way, with their own teams, resources and development methods. Meanwhile, the same organization manages modern applications in the cloud in a different way, with different staff, resources and methods. Tanzu enables companies to manage traditional workloads never designed for the cloud alongside modern applications in the same way, viewing them in a single pane of glass.

Tanzu introduces traditional applications to a modern platform and also connects those applications with modern methods and tooling, preparing the way for future modernization efforts. It makes it easier for IT and development to create integrated, cross-functional teams that are organized around products and customers. And it helps organizations and teams evolve from individual approaches, capabilities and technologies to successful combinations that are applied to specific customer journeys in progressive engagements (tied directly to business outcomes) to achieve compounding business impact. This makes Tanzu much more than just another management console. It makes it a platform for transformation.

Many of our clients have ongoing IT and application modernization projects that will benefit from the Tanzu suite. We look forward to using Tanzu with our customers not only for application transformation, but also as a platform for teaching organizations how to operate in modern ways.

Modern platforms are the basis for creating modern applications, and they’re also the foundation for thinking and operating in new, disruptive ways. Tanzu will help companies finally bridge the divide between legacy solutions and modern applications to drive value and improve customer experiences while radically reducing costs and driving efficiencies.

Hey! Have a look at the all-new Online DevOps Dojo

DevOps dojos have been wildly popular as on-site workshops that support an organization’s DevOps transformation. But even before COVID-19 and social distancing, in-person sessions had their limits. They could only reach a certain number of employees and customers. To bridge the gap, Anteelo has created Online DevOps Dojo — an open source, immersive learning experience for the DevOps community.

We designed the online dojo as an extension, not a replacement, for on-site DevOps dojos, but now that social distancing is the “new normal,” the timing for an online dojo is even more relevant.

Our primary goal, however, is not to offer training, but to contribute a set of DevOps learning experiences so the DevOps community can assemble and create more content in support of DevOps adoption and talent reskilling. We are eager to build a community around the Online DevOps Dojo — a community of module maintainers, translators, storytellers and even creators of new learning modules.

An immersive story

We designed the Online DevOps Dojo around a story, because there’s nothing better than a good story to get people immersed in something. The Online DevOps Dojo learning modules tell the story of a fictitious company, “Pet Clinic,” and its employees as they go through their DevOps journey. Throughout the modules, you learn about the characters, interact with them, and understand how each one plays a role in the DevOps transformation of Pet Clinic.


The modules, a mix of cultural and technical topics, illustrate important DevOps patterns — how to lead change, version control, continuous integration and “shift left” security — as described in various blog posts, white papers and great books such as “Accelerate” by Nicole Forsgren, Jez Humble and Gene Kim.

The modules provide an interactive experience in which you can follow the step-by-step instructions as well as go off script to explore and learn more, without fear of breaking anything.

Participants can use the Online DevOps Dojo to:

  • Prepare for a face-to-face DevOps dojo, typically a 1- to 2-week event (that can also be held virtually), by learning techniques in advance
  • Create a complete curriculum with hands-on labs
  • Provide a way to get knowledge when you most need it, all within your browser
  • Share what “good looks like” when answering a question around any DevOps pattern
  • Leverage the story and characters, and even extend the story to create more learning experiences (not necessarily DevOps-related)

We have released the Online DevOps Dojo modules under the Mozilla Public License 2.0 so they are available to benefit the entire DevOps community.


What’s next? Experiment and contribute

As you “shelter in place,” we encourage you to spend some of your time trying out the learning modules. We sincerely hope you’ll enjoy them and that they will help support your DevOps adoption. Depending on the reception of the initial launch, we also plan to release new modules.

Let’s support DevOps adoption! Start browsing the code of the Online DevOps Dojo, review the guidelines for contribution and ask a question by opening an issue in the GitHub repository.

A new way to test apps? – Testing as a Service (TaaS)


There’s a better way to test the software applications powering your latest business services. It’s called Testing as a Service. TaaS helps to improve the quality of your applications in a way that’s faster, more scalable, simpler and cheaper than traditional testing approaches.

The need for a better, faster and more cost-effective way of testing applications is clear. Applications play an increasingly important role in the products and services that businesses deliver in today’s digital economy. However, ensuring that applications are offered effectively, securely and cost-effectively at speed remains a challenge.

Traditional testing models are neither predictable nor cost-effective. Because they often fail to leverage best-in-class processes and tools, their quality may be lower than required. What’s more, the cost of traditional testing is based on the number of testing professionals involved, their skill sets and the engagement’s duration. So extra costs can easily accrue for certain skills, equipment, even desk space. Estimating your total cost can be tricky.

A more efficient way

TaaS is far more efficient than traditional testing models, too. Service providers offer an output-based delivery approach with an efficient and flexible consumption-based procurement model.

With TaaS, customers procure testing services from a catalog of standardized deliverables that they can use to assess the functionality, technical quality, performance and even security level of their applications. The deliverables are determined based on the testing outputs needed at different stages of the software development lifecycle. That might be the creation of a test strategy, plan or automated test script or the execution of an automated test.

To make that easy, TaaS supports both modern Agile and DevOps approaches as well as more traditional lifecycles such as Waterfall.

What really differentiates TaaS from traditional testing models is that customers gain tremendous flexibility. They can adjust the type and quantity of deliverables they receive, as well as when they receive them. Customers can quickly and easily tailor their testing to meet their changing project needs, and to scale testing services up or down based on their evolving business demand, avoiding unnecessary costs.

This kind of flexibility is especially important for Agile projects. There, the development teams may prioritize and re-prioritize from sprint to sprint, based on what will deliver the most value to the business.


More scaling, less cost

TaaS is highly scalable. If your testing volumes are lower than expected, the service can be scaled down. Conversely, if demand increases and testing needs to be scaled up, more testers can be quickly rolled in to create the necessary deliverables.

Also, since TaaS is based on output, the service provider is responsible for managing the staffing and availability of tools. This lets the customer focus on higher-value items. Because the service provider manages the day-to-day activities and resources of the testing team, adjusting capacity as needed to meet demand, there’s less complexity and hassle for customers. Costs are lower, too, since the customer pays only for what they consume. And because consumption prices are fixed, estimating total costs is easy and accurate.

While TaaS tests are conducted via a public or private cloud, TaaS is not a cloud-only service. It can be used with both your cloud and on-premises systems. Test management software, if required, can also be included as part of the service. The license cost is included in the service, so you won’t have to invest in long-term licenses. That said, if you’ve already made those investments, TaaS can be easily configured to use your existing test-management products.

TaaS is also a lot more than Software as a Service. SaaS gives you only the tools; with TaaS, you also get test cases, test automation and test execution on top of sophisticated tooling.

Adopt TaaS and your benefits can include:

  • Testing services with higher quality, faster speed and lower costs
  • Greater flexibility and scalability to adjust testing to meet your evolving business needs
  • Increased testing coverage and efficiency
  • Freedom from up-front investments and maintenance fees

Here’s a real-world example of attaining lower costs: One of our customers wanted to merge the back-end systems of several companies it had recently acquired. This was to be a multi-year project that would include systems for manufacturing, packaging and shipping. The first phase needed to cover more than 300 requirements and 600 test-case executions. The company would also need to test more than 45 third-party application interfaces. With TaaS and our help, the company successfully completed this project phase at a cost nearly 40 percent lower than originally forecast.

Enhancing project management by extending the Scrum framework


Scrum — a management framework that focuses on cutting through complexity to better meet business needs — maps out the elements required to run a successful project. However, many projects need to go beyond the typical Daily, Planning, Review and Retro meeting sessions used in the Scrum process to be truly successful.

There are ways to enhance your Scrum framework, increase control and improve cooperation with key stakeholders. Below are suggestions for additional meetings, taken straight from Anteelo best practices, that can help keep your process on track.

Scrum Master/Product Owner meeting (SM/PO)

Who Should Attend: Scrum Master and Product Owner

When Should They Meet: Recommended weekly, depending on the size of your backlog

As the name suggests, this meeting takes place between the SM and PO. Since neither is required at the Daily Scrum, they do not have a formal time to share news, focus on issues or align on responsibilities. And for teams with differing levels of agile experience, the boundary between the two roles might seem fuzzy. This meeting is dedicated to clearing up that fuzziness.

In an ideal world, every Scrum project would be 100% compliant with the Scrum Guide. In reality, such projects are very hard to come by. That’s why one of the main goals for the SM is to create and implement the final vision of the Scrum framework for the project: one that’s unique, tailored to your project and carrying your own “flavor.” This requires cooperation and alignment between the SM and PO.

A useful tool to manage your work is the SM/PO Backlog. This is a separate document with a list of issues or questions you should solve to keep your project running. Examples include trainings for the development team, documentation or stakeholder management.

Risks, Issues, Opportunities Management (RIOM) meeting

Who Should Attend: SM, PO, Program Manager, Development Team, relevant stakeholders

When Should They Meet: Recommended weekly, although it could be less often if merged with another meeting

The primary goal of RIOM is avoiding foreseeable issues. This is seemingly handled during the Daily Standups, when the development team shares any potential threats to the Sprint Goal. However, fixating only on short-term risks may result in losing the big picture.

To counter that, we recommend having a separate meeting with relevant stakeholders. Inviting the development team makes sense, as they have hands-on knowledge of risks, issues or opportunities.

Steering Board meeting

Who Should Attend: Steering board, SM, PO

When Should They Meet: Should take place at least once a sprint. A rule of thumb is bi-weekly.

Every project needs control. Scrum makes it easier thanks to transparency and cooperation with the client. Ideally your Product Owner is part of the client’s organization, having a clear communication channel with business stakeholders.

As a Scrum Master, you still want to be able to inform or influence your stakeholders. Experience shows that the Review meeting alone is not enough to avoid hazardous communication gaps. The overall status of the project needs to be tracked and managed closely. The more information you share with stakeholders, the more support you can muster from management.

Transparency is encouraged in the agile approach, and meeting with your Steering Board is crucial to ensure communication gaps are bridged before they cause delays or harm your project. Steering Board meetings can be successfully combined with RIOM.

Roadmap update meeting

Who Should Attend: Scrum Team

When Should They Meet: Quarterly and/or after major business changes of the project

The Product Backlog is one of the Product Owner’s key tools, and it is supplemented beautifully by the Product Roadmap. The roadmap outlines high-level directions, functionalities or areas of product development over a one-year (or longer) period. In other words, it helps express the product vision.

The PO is responsible for keeping the roadmap up to date. This document gives the Scrum Team a better understanding of how the sprints align with the overall direction of the project. Although the roadmap is created and maintained by the PO (in conjunction with the business stakeholders), it’s vital that the Scrum Team stays informed.

These are only a handful of the meetings that can enhance your Scrum framework and guard against possible shortcomings. Ultimately, it’s up to the Scrum Master to tailor the framework to the needs of the project.

Migrating ERP to the Cloud


Many large companies are considering migrating their ERP applications — including SAP S/4HANA — to the public cloud. Drivers for many companies include cost, flexibility or adopting a cloud-first strategy.

 

Q: What questions do companies ask when they are considering moving ERP applications to a public cloud?

A: First, they ask, “Can I move?” These are large, robust systems, and companies want to make sure their apps are technically capable of running in the cloud. The second question they ask is, “Should I move?” They want to be confident that when they migrate their mission-critical applications, they can continue running their business. And the third question is, “How expensive is the public cloud?” We manage, migrate and run some of the most complex SAP systems in the world, so I know firsthand that SAP can run effectively in the cloud. SLAs and maintenance can continue in the cloud, and cloud security is well covered. Cost reduction can be one of the big benefits of public cloud, but only if the design, migration and run phases are properly managed.

 

Q: How do companies decide when the right time is to make the move?

A: Some of our customers have a good business case for waiting to upgrade, but most are exploring their options. Moving to SAP S/4HANA or other ERP systems is a big change, so why not move to cloud at the same time? These are big systems that handle companies’ finance and production lines, so there is often a limited maintenance window to bring them down. If a transition is on your roadmap, you’ll save yourself a step. And in the current macroeconomic climate, a slowdown in system usage is a good time to make changes in preparation for heavier future traffic.

 

Q: What business disruption can a company expect when it migrates to the cloud?

A: Moving to the cloud is not very different from any other migration. It’s become a routine process, and when done correctly, companies can expect near-zero downtime.

Companies can mitigate risks even more if they first test out the migration with one business unit and a single proof of concept. They also should clean up all the data they don’t need before migrating.

 

Q: What benefits have clients achieved?

A: The benefits are elasticity, agility and being able to grow on command. In the case of SAP, companies can tap into its new analytics capabilities as well as those of the ecosystem they gain when moving to the hyperscalers’ clouds.

If you want to look at business outcomes, consider Croda International, a global chemical manufacturer. It wanted to take advantage of the on-demand capabilities of cloud while increasing the ability of its IT department to respond to business demands. The company developed a strategy for moving to the cloud in anticipation of a major upgrade to SAP S/4HANA. By deploying our Platform as a Service for SAP on Azure as part of a low-risk, phased approach, the company migrated 106 workloads in just 12 weeks.

 

Q: How should companies start the whole process?

A: Companies need to have true conversations — within the business and with their systems integrator. The new environments are richer and easier to consume and combine. At the same time, companies need to think about cost control and avoid lock-in or technical debt. They need to think about what they are trying to achieve, both digitally and in their business. They can introduce new technology. They can think about connecting their SAP landscape to machine learning or the internet of things (IoT). They can introduce new features to customers and suppliers. In the cloud, they can leverage so many things they weren’t able to access before. Thinking about the end state will allow companies to move their businesses forward.


A 12-point operational resilience framework


Businesses today must take a new approach to operational resilience so that they can be more adept at anticipating disruptive events and agile in responding to and recovering from them.

In a world where risks and compliance requirements rapidly expand and evolve, it’s not a question of if there will be a disruption to your services, systems or processes but when.

An organization that improves its approach to operational resilience can greatly minimize its risk when business disruptions occur. That includes external disruptions such as natural disasters, extreme weather conditions and far-reaching medical crises. It also includes internal disruptions such as system outages.

Until quite recently, operational resilience was developed with a risk-avoidance mindset: estimate the likelihood of a particular type of disruption occurring, then plan accordingly for it. Given finite resources, disruptions that seemed relatively unlikely to occur were considered low risk and might not be planned for at all.

 

One size doesn’t fit all

That way of thinking no longer makes sense. Today, organizations face operational disruptions from more sources than ever before, and businesses must widen their planning scope. For instance, companies affected by the supply chain upsets we saw in the past year know now that they have to develop operational resiliency plans to handle those business disruptions. Cybersecurity threats, natural catastrophes, new laws and regulations and even such changes as new competitors entering the market continue to pose their own risks to operational resilience.

There is no one-size-fits-all solution for responding to such a wide range of operational risks. The capabilities needed to respond to a ransomware attack are different from those needed to recover from a massive wildfire.

Responses must also be tailored to align with organizations’ different requirements. A bank that survives a highly publicized cyberattack needs to repair its reputation. An online retailer whose site is shut down after a hurricane needs to restore 24×7 service availability. And a manufacturer with factories in a medical-crisis hot spot needs to provide a safer workplace for employees.


 

The operational resilience framework

Given all this, what should organizations do to improve their approach to operational resilience? We’ve developed a framework that identifies 12 management disciplines that can be grouped together in different ways to ensure appropriate operational resiliency responses for different risks. The core solutions and services that are available as part of the Operational Resilience Framework combine multiple components of our Enterprise Technology Stack, including applications, security and ITO.

You don’t have to tackle implementing these disciplines all at once. But together, they can enable and strengthen your organization’s operational resilience.

Here’s a quick rundown of the framework’s 12 disciplines and the actions they enable:

  • Continuity management: Analyze business impacts, set return-to-work tactics
  • Corporate incident response: Manage health, safety and environmental risks, proactively mitigate risk
  • Crisis management and communications: Orchestrate response plans, achieve a 360-degree view of the current crisis status
  • Critical enterprise assets: Discover, map and apply governance to key assets
  • Cyber and information security: Respond and recover from attacks
  • Governance, audit and compliance: Continuously monitor compliance, apply industry guidelines
  • IT disaster recovery: Minimize the impacts, structure and test recovery plans
  • Operational risk management: Identify and assess business risks, monitor and minimize issues
  • Organizational behavior: Drive and measure effective attitudes and practices
  • People and culture: Encourage an operating model for resilience
  • Service operations: Assure operational excellence and efficiency
  • Supply-chain management: Manage vendor risk, assure continuity

 

Real-world resilience

How do you apply this framework? Remember, different risks require different capabilities and responses.

In the case of cyber incidents, for example, your primary focus should be on bringing together capabilities including cyber and information security, IT disaster recovery, critical enterprise assets and governance, audit and compliance.

For business interruptions, you should focus first on building capabilities in areas including continuity management, crisis management and communications, and supply chain management.

Among the main areas of focus for ensuring compliance with new laws and regulations will be continuity management, operational risk management, and governance, audit and compliance.

What’s needed today is an approach to operational resilience that is holistic, free of functional silos and driven by a mindset of “not if, but when.” Organizations that develop this new resilience mindset and practices will be ready for just about anything.

A New Approach For Designing Citizen Services


Most government IT solutions were created only with the intention of automating the back office and focusing on efficiency. Requirements were gathered from case workers and then converted into functionality. The resulting IT solution is entirely focused on the internal operating model.

A similar approach has been taken with most government websites, which are often designed based on a government agency’s internal organisational structure, resulting in a poor user experience for citizens. Far too often, citizens start out on a promising home page, only to get lost in the weeds of dead-end pages, incorrect forms, and even other websites as they try in vain to navigate the unfamiliar organisational structure of the government agency.

But citizens no longer live in an analogue world and they’ve run out of patience. They expect governments to present digital citizen services in an easy-to-use, always-on, self-service, personal, and proactive way.

How to deliver a digital government experience


To deliver an experience that meets expectations, it’s clear that we need another approach — one centred around citizens. Requirements and functionality should be derived from the behaviour of the citizen as a customer – what information or service they need, what problem they need to solve, how they want to consume the content — and not from the organisational setup.

This also means that the government needs to provide a seamless and transparent interaction across channels. There is no time to develop a single application per service. Instead we must think in terms of platform models, where new services can be introduced quickly on top of existing services and a standard approach used to build applications.

This kind of transformation doesn’t just involve technology; it requires the transformation of the government organisation itself to improve how it provides services to its citizens via digital channels. It requires a strategy that is endorsed by the organisation’s leadership and mandates a transformation toward a new operating model, new capabilities and processes.

Balancing front- and back-office digital programs


Going digital is not just about revisiting current processes and modernising legacy systems. It is also about balancing programs in the front and back office. This is a critical strategic point. Most digital programs are focused on improving the front office, i.e. the websites or apps that citizens interact with. That’s good insofar as it suggests a focus on citizen interactions. However, that model is unsustainable when the back office continues on as before – manual and labour-intensive, using the same legacy applications, and creating a backlog of requests.

Balance your digital programs with these five enablers

Some governments have already got the message and are redesigning their services with this model in mind. In the United Kingdom, for example, the government has published a set of best practices for designing a good citizen experience. The United States is following along the same lines.

Based on these design principles and drawing on our own experiences working with government organisations, Anteelo has identified five key enablers of successful citizen experience transformation:

Use design thinking. Also called human-centred design, design thinking is a creative problem-solving process that makes the citizen the central focus of designing a better experience. It is ideal for tackling front-office-related aspects.


Experiment in an agile way. Traditional approaches such as waterfall development take too long to deliver value. An agile, more iterative approach allows for the kinds of experimentation that can lead to process (and application) innovation in both the front- and back-office. This experimentation is a vital component of any digital journey and must be endorsed to get people, processes and technology aligned to optimise the workload.


Invest to drive automation. Governments can greatly benefit from introducing new technologies to automate administrative tasks and to interconnect and then dynamically manage public infrastructure. Back-office applications, for example, can see a surge in efficiency from applying RPA.


Get new digital capabilities. Having the right capabilities and people with knowledge and experience is key to executing a digital transformation program. No organisation will be able to introduce new technologies and change the operating model if it doesn’t have the right capabilities among its workforce.


Become data-driven. Government organisations that embrace data can transform services and become more predictive, proactive, preventive and personalised. Becoming a data-driven organisation also brings internal value. For one thing, greater efficiency means better utilisation of resources. Most of all, it brings value to citizens’ experiences by better understanding their behaviour and engaging them in meaningful interactions.


These enablers, of course, only describe a few key pieces of a more complex puzzle. We explore each enabler in considerably more depth, and show how to turn each into concrete actions that drive better citizen outcomes, in our new white paper, “Five enablers for governments to serve today’s digital citizens.”
