Significance of the “design for operations” approach for service-based IT


To deliver on digital transformation and improve business performance, enterprises are adopting a “design for operations” approach to software development and delivery. By “design for operations” we mean that software is designed to run continuously, with frequent incremental updates that can be made at scale. The approach takes into consideration the end-to-end costs of delivering and servicing the software, not just the initial development costs. It is based on applying intelligent automation at scale and connecting ever-changing customer needs to automated IT infrastructure. DevOps is the set of practices that do this, enabled by software pipelines that support Continuous Delivery.
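To make the pipeline idea concrete, the sketch below models continuous delivery as a sequence of automated, fail-fast stages. It is a minimal illustration in Python, not a recommended implementation: the stage commands (a build step, pytest, a deploy.py script) are placeholder assumptions, and real pipelines are normally declared in a CI/CD tool rather than hand-rolled.

```python
# Minimal sketch of a continuous-delivery pipeline as sequential, automated
# stages. Stage names and commands are hypothetical placeholders.
import subprocess
import sys

STAGES = [
    ("build",  ["python", "-m", "build"]),                    # package the artifact
    ("test",   ["python", "-m", "pytest", "-q"]),             # gate on automated tests
    ("deploy", ["python", "deploy.py", "--env", "staging"]),  # incremental rollout
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"[pipeline] running stage: {name}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast: a broken stage stops the release, keeping the
            # service continuously deployable rather than continuously broken.
            sys.exit(f"[pipeline] stage '{name}' failed; aborting release")

if __name__ == "__main__":
    run_pipeline()
```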


The challenge: Design for operations

Products and services pass through various stages of design evolution:

  • design for purpose (the product performs a specific function)
  • design for manufacture (the product can be mass produced)
  • design for operations (the product encompasses ongoing use and the full product life cycle)

Automobiles are a good example: from Daimler’s horseless carriage, to Ford’s Model T, and finally to Toyota’s Prius (or anything else that’s sold with a service plan). Including the service plan means the automaker incurs the costs of servicing the car after it’s purchased, so the automaker is now responsible for the car’s end-to-end life cycle. Information technology has followed the same path: from early code-breaking computers such as Colossus, to packaged software such as Oracle, to software-based services such as Netflix.

The key point is that software-based services companies like Netflix have figured out that they own the end-to-end cost of delivering their software, and have optimized accordingly, using practices we now call DevOps.

There are efficiencies that can be achieved only with software designed for operations. This means that companies running bespoke software (designed for purpose) and packaged software (designed for manufacture) have a maturity gap, where the ongoing cost of ownership outweighs the value delivered. If that gap can be closed, delivery can be better, faster and cheaper (no need to pick just two).

It’s essential to close that gap, because if competitors can deliver better, faster and cheaper, that puts them at an advantage. This even includes the public sector, since government departments, agencies and local authorities are all under pressure to deliver higher quality services to citizens with lower impact on taxation.

The reason we “shift left”

A typical outcome of the design-for-purpose approach is that functional requirements (what the software should do) are pursued at the expense of nonfunctional requirements (security, compliance, usability, maintainability). As a result, concerns such as security get bolted on later. These deferred requirements accrue as technical debt: decisions that seem expedient in the short term become costly in the longer term.

The concept of “shifting left” is about ensuring that all requirements are included in the design process from the beginning. Think of a project timeline and “shifting left” the items in the timeline, such as security and testing, so they happen sooner. In practice, that doesn’t have to mean lots of extra development work, as careful choices of platforms and frameworks can ensure that aspects such as security are baked in from the beginning.
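One concrete way to shift a nonfunctional requirement left is to encode it as an automated test that runs from the first commit. The pytest-style sketch below is a minimal illustration under assumed conventions: a hypothetical src/ source tree, and a deployment contract in which configuration such as DATABASE_URL is injected via the environment.

```python
# Illustration of "shifting left": a nonfunctional requirement (basic secret
# hygiene) expressed as tests that run from day one, rather than a security
# review bolted on at the end. File layout and setting names are hypothetical.
import os
import re
from pathlib import Path

SECRET_PATTERN = re.compile(
    r"(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I
)

def test_no_hardcoded_secrets():
    # Scan source files for credential-looking literals; secrets should be
    # injected via the environment or a vault, never committed.
    for path in Path("src").rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            assert not SECRET_PATTERN.search(line), \
                f"possible hard-coded secret at {path}:{lineno}"

def test_secrets_come_from_environment():
    # The deployment contract: configuration is read at runtime, so the
    # same artifact runs unchanged in every environment.
    assert "DATABASE_URL" in os.environ
```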

Contemporary development practices support this when we ask, “How do we know that this application is performing to expectations in the production environment?” That moves well past “Does it work?” and starts asking “How might it not work, and how will we know?”
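A minimal sketch of that mindset in Python: instrument a handler, record latencies, and compare the 95th percentile against an explicit service-level objective. The SLO value and the toy handler are assumptions for illustration; a production system would export such metrics to a monitoring stack instead.

```python
# Sketch of answering "how will we know?" in code: measure request latency
# and compare it against an explicit SLO. Values here are illustrative.
import time
from statistics import quantiles

SLO_P95_SECONDS = 0.2   # assumed target: 95% of requests under 200 ms
latencies: list[float] = []

def instrumented(handler):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        finally:
            latencies.append(time.perf_counter() - start)
    return wrapper

@instrumented
def handle_request(payload: str) -> str:
    return payload.upper()   # stand-in for real work

def slo_report() -> str:
    if len(latencies) < 20:
        return "not enough data yet"
    p95 = quantiles(latencies, n=20)[-1]   # 95th percentile
    status = "OK" if p95 <= SLO_P95_SECONDS else "BREACH"
    return f"p95={p95 * 1000:.1f} ms ({status})"

for i in range(100):
    handle_request(f"request-{i}")
print(slo_report())
```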

Enterprises need to adopt a “design for operations” model that includes a comprehensive approach to intelligent automation that combines analytics, lean techniques and automation capabilities. This approach produces greater insights, speed and efficiency and enables service-based solutions that are operational on Day 1.

Operationalizing analytics

Organizations with a high “Analytics IQ” have strategy, culture and continuous-improvement processes that help them identify and develop new digital business models. Powering these capabilities is the organization’s move from ad hoc to operationalized analytics.

Seamless data flow

Operationalized analytics is the interoperation of multiple disciplines to support the seamless flow of data, from initial analytic discovery to embedding predictive and prescriptive analytics into organizational operations, applications and machines. The impact of the embedded analytics is then measured, monitored and further analyzed to circle back to new analytics discoveries in a continuous improvement loop, much like a fully matured industrial process.
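The loop can be sketched schematically in Python. Everything here is invented for illustration, including the accuracy floor and the simulated measurements; the point is the shape of the cycle: embed, measure, and let degradation trigger new discovery.

```python
# Schematic of the closed loop: a model is promoted into operations, its
# live performance is measured, and degradation triggers new discovery.
import random

def discover_model():
    # Stand-in for analytic discovery and experimentation.
    return {"version": random.randint(1, 1000)}

def measure_in_production(model) -> float:
    # Stand-in for monitoring embedded analytics (e.g., precision on
    # labeled outcomes flowing back from operations).
    return random.uniform(0.6, 0.95)

RETRAIN_BELOW = 0.75  # assumed acceptable floor for live accuracy

model = discover_model()
for cycle in range(5):
    accuracy = measure_in_production(model)
    print(f"cycle {cycle}: model v{model['version']} accuracy={accuracy:.2f}")
    if accuracy < RETRAIN_BELOW:
        # Measured impact circles back into new discovery work.
        model = discover_model()
        print(f"  accuracy below floor -> new model v{model['version']}")
```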

An example of operationalized analytics is the industrialized AI utility depicted below. It enables automated access to and collection of data, ingestion and cleaning of that data, agile experimentation through automated execution of algorithms, and generation of insights.
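As a toy rendering of those stages (collect, ingest and clean, experiment, generate insight), consider the following Python sketch; the records and quality rules are invented for illustration.

```python
# Toy rendering of the stages an industrialized AI utility automates:
# collect -> ingest/clean -> experiment -> insight. Data is invented.
raw_records = [
    {"customer": "a", "spend": "120.5"},
    {"customer": "b", "spend": None},   # dirty record
    {"customer": "c", "spend": "80"},
]

def ingest_and_clean(records):
    # Drop records that fail basic quality rules and normalize types.
    return [
        {"customer": r["customer"], "spend": float(r["spend"])}
        for r in records
        if r.get("spend") is not None
    ]

def experiment(clean):
    # Stand-in for automated execution of algorithms: here, a trivial
    # aggregate; in practice, a battery of candidate models.
    return sum(r["spend"] for r in clean) / len(clean)

clean = ingest_and_clean(raw_records)
print(f"insight: average spend = {experiment(clean):.2f} "
      f"across {len(clean)} customers")
```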

[Figure: DataOps, the industrialized AI utility]

Operationalized analytics builds on hybrid data management (HDM), an HDM reference architecture (HDM-RA), and an industrialized analytics and AI platform to enable organizations to implement industrial-strength analytics as a foundation of their digital transformation.

Operationalized analytics encompasses the following:

  • Data discovery includes the data discovery environment, methods, technologies and processes to support rapid self-service data sharing, analytics experimentation, model building, and generation of information insights.
  • Analytics production and management focuses on the processes required to support rigorous treatment and ongoing management of analytics models and analytics intellectual property as competitive assets.
  • Decision management provides a clear understanding of, and access to, the information needed to augment decision making at the right time, in the right place and in the right format.
  • Application integration incorporates analytics models into enterprise applications, including customer relationship management (CRM), enterprise resource planning (ERP), marketing automation, financial systems and more.
  • Information delivery of relevant and timely analytics to the right users, at the right time and in the right format, is enabled by self-service analytics and data preparation. This improves the ease and speed with which organizations can visualize and uncover insights for better decision making.
  • Analytics governance is the set of multidisciplinary structures, policies, procedures, processes and controls for managing information and analytics models at an enterprise level to support an organization’s regulatory, legal, risk, environmental and operational requirements.
  • Analytics culture is key, as crossing the chasm from ad hoc analytics projects to analytics models integrated into front-line operations requires a cultural shift. Merely having a strong team of data scientists and a great technology platform will not make an impact unless the overall organization also understands the benefits of analytics and embraces the change management required to implement analytically driven decisions.
  • DataOps is an emerging practice that brings together specialists in data science, data engineering, software development and operations to align the development of data-intensive applications with business objectives and to shorten development cycles. DataOps is a new people, process and tools paradigm that promotes repeatability, productivity, agility and self-service while achieving continuous deployment of analytics models and solutions. DataOps further raises Analytics IQ by enabling faster delivery of analytics solutions with predictable business outcomes. (A minimal sketch of one such practice, an automated model deployment gate, follows this list.)
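As a sketch of the deployment gate mentioned above, the following Python fragment promotes a candidate model only if it beats the incumbent on a holdout set, making model releases repeatable rather than ad hoc. The models, scoring rule and data are all hypothetical.

```python
# DataOps-style deployment gate: promote a candidate model automatically
# only if it outperforms the incumbent on a holdout set. All names,
# prediction rules and data are invented for illustration.
from dataclasses import dataclass

@dataclass
class Model:
    name: str

    def predict(self, x: float) -> int:
        # Two toy decision rules standing in for trained models.
        return int(x > 0.5) if self.name == "candidate" else int(x > 0.7)

    def score(self, holdout) -> float:
        # Stand-in for a real evaluation (accuracy, AUC, business KPI...).
        return sum(self.predict(x) == y for x, y in holdout) / len(holdout)

holdout = [(0.2, 0), (0.6, 1), (0.8, 1), (0.4, 0)]
incumbent, candidate = Model("incumbent"), Model("candidate")

def promote_if_better(candidate: Model, incumbent: Model) -> Model:
    c, i = candidate.score(holdout), incumbent.score(holdout)
    print(f"candidate={c:.2f} incumbent={i:.2f}")
    return candidate if c > i else incumbent

live = promote_if_better(candidate, incumbent)
print(f"serving: {live.name}")
```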