The impact of low-code on Software Development


An emerging way to program, known as low-code application development, is transforming the way we create software. With this new approach, we’re creating applications faster and more flexibly than ever before. What’s more, this work is being done by teams in which up to three-quarters of the members may have no prior experience in developing software.

Low-code development is one of the tools we deploy for our application services and solutions, a key component of our Enterprise Technology Stack. The approach is gaining traction. Research firm Gartner recently predicted that the worldwide market for low-code development technology will reach $13.8 billion this year, an increase of nearly 23% over last year. This rising demand for low-code technology is being driven by several factors, including the surge in remote development we have seen over the last year, digital disruption, hyper-automation and the rise of so-called composable businesses.

Low-code platforms are especially valuable to the public sector. Government IT groups need to be innovative and agile, yet they often struggle to be sufficiently responsive. Traditionally, they’ve developed applications through hand coding. While this approach offers a great deal of customization, it typically comes at the cost of long development times and high budgets. By comparison, low-code development is far faster, more agile and less costly.

With low-code platforms, public-sector developers no longer write all software code manually. Instead, they use visual “point-and-click” modeling tools — typically offered as a service in the cloud — to assemble web and mobile applications used by citizens. These “citizen developers” are a new breed of low-code programmer: often public-sector end users themselves, frequently with no prior development experience. The technology is relatively easy to learn and use.

Low-code tools are not appropriate for all projects. They’re best for developing software that involves limited volumes, simple workflows, straightforward processes and a predictable number of users. For these kinds of projects, we estimate that up to 80% of the development work can be done by citizen developers.

Experienced developers can benefit from using low-code tools, too. Over my own career of more than 30 years, I have three times used traditional development methods to laboriously and methodically develop a mobile app. Since adopting speedy low-code tools, I have developed nine trial mobile apps in just the last year.

Apps in a snap

To get a sense of just how quick working with low-code tools can be, consider a project we recently completed for the Government of Flanders. The project involved the 96 vaccination centers the government was opening across Flanders. To track the centers’ inventories of vaccine doses and associated supplies, the government needed a custom software application.

After a classical logistics software vendor passed on the project, we held our official kick-off meeting on Feb. 1. Just 18 days later, not only was our low-code inventory application up and running (with only a few open issues resolved in the subsequent days), but the government’s first vaccination center was also open for service. There’s no way we could have developed the application that quickly using traditional hand coding.

Low-code development can also be done with minimal staffing because all the ‘heavy lifting’ is done by the low-code environment. However, this is not an excuse to put new employees in charge of critical applications. Experienced staff are still needed to solve the classical issues of design, change management, planning, licenses, support, scoping and contracting.

We developed the Belgian vaccine inventory application with a development team of just two junior developers — one with the company for four months, the other for eight — working full time on the application. They were steered and supported by staff in more classical roles: a seasoned analyst, a government representative, a project manager, a low-code (Microsoft) expert and a solution architect. For these staff members, the low-code approach consumed only about 40% of their time.

Of course, we also leveraged agreements the government already had around Office 365 and Azure. But that’s yet another advantage of low-code software: It can exploit existing IT investments.


Finding flexibility

Flexibility is another big advantage of low-code development. In the context of software development, this means being able to make quick mid-course corrections. With traditional development tools, responding quickly to surprises that inevitably arise can be difficult if not impossible. But with low-code tools, making mid-course corrections is actually part of the original plan.

We had to make a mid-course correction when using low-code tools to develop an application to translate official government documents into all 18 languages spoken within Belgium.

Our first version of the software, despite using a reputable cognitive translation service, was missing three languages and providing sub-standard quality for another two. To fix this, our citizen developer — another new hire with no previous technical background — essentially clicked his way through the software’s workflows, integrated a second cognitive service from another supplier, and then created a dashboard indicating which of the two services the software should use when translating to a particular language.
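To illustrate the pattern in conventional code rather than low-code tooling, here is a minimal sketch of that per-language routing. The service clients, language codes and function names are hypothetical placeholders, not the actual build.

```python
# Hypothetical sketch of per-language routing between two translation services.
# Service clients and language codes are placeholders, not the actual low-code build.

def call_service_a(text: str, lang: str) -> str:
    # Placeholder for the first cognitive translation service's API.
    return f"[service A -> {lang}] {text}"

def call_service_b(text: str, lang: str) -> str:
    # Placeholder for the second supplier's API, added mid-course.
    return f"[service B -> {lang}] {text}"

# Dashboard-style mapping: which service handles each target language.
# Languages the first service misses or translates poorly are routed to the second.
SERVICE_BY_LANGUAGE = {
    "fr": call_service_a,
    "nl": call_service_a,
    "de": call_service_b,   # example: better quality from the second supplier
    # ... one entry per supported target language
}

def translate(text: str, target_language: str) -> str:
    """Route a translation request to the configured service for that language."""
    service = SERVICE_BY_LANGUAGE.get(target_language, call_service_a)
    return service(text, target_language)

print(translate("Officieel document", "de"))  # routed to service B
```

The dashboard the citizen developer built plays the role of this lookup table: a per-language switch between the two suppliers, editable without touching the workflow itself.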

Low-code development tools, when used in the right context, are fast, flexible and accessible to even first-time developers. And while these tools have their limitations and pitfalls, that’s seldom an excuse for not using them. Low-code tools are transforming how the public sector develops software, and we expect to see a lot more of it soon.

A better approach to Data Management, from Lakes to Watersheds


As a data scientist, I have a vested interest in how data is managed in systems. After all, better data management means I can bring more value to the table. But I’ve come to learn that it’s not how an individual system manages data, but how well the enterprise manages data holistically, that amplifies the value of a data scientist.

Many organizations today create data lakes to support the work of data scientists and analytics. At the most basic level, data lakes are big places to store lots of data. Instead of searching for needed data across enterprise servers, users pour copies into one repository – with one access point, one set of firewall rules (at least to get in), one password (hallelujah) … just ONE for a whole bunch of things.

Data scientists and Big Data folks love this; the more data, the better. And enterprises feel an urgency to get everyone to participate and send all data to the data lake. But, this doesn’t solve the problem of holistic data management. What happens, after all, when people keep copies of data that are not in sync? Which version becomes the “right” data source, or the best one?

If everyone is pouring in everything they have, how do you know what’s good vs. what’s, well, scum?

I’m not pointing out anything new here. Data governance is a known issue with data lakes, but lots of things relegated to “known issues” never get resolved. Known issues are unfun and unsexy to work on, so they get tabled, back-burnered, set aside.

Organizations usually have good intentions to go back and address known issues at some point, but too often, these challenges end up paving the road to Technical Debt Hell. Or, in the case of data lakes, making the lake so dirty that people stop trusting it.

To avoid this scenario, we need to go to the source and expand our mental model from talking about systems that collect data, like data lakes, to talking about systems that support the flow of data. I propose a different mental model: data watersheds.

In North America, we use the term “watershed” to refer to drainage basins that encompass all waters that flow into a river and, ultimately, into the ocean or a lake. With this frame of reference, let’s contrast this “data flow” model to a traditional collection model.

In a data collection model, data analytics professionals work to get all enterprise systems contributing their raw data to a data lake. This is good, because it connects what was once systematically disconnected and makes it available at a critical mass, enabling comparative and predictive analytics. However, this data remains contextually disconnected.

Here is an extremely simplified view of four potential systematically and contextually disconnected enterprise systems: Customer Relationship Management (CRM), Finance/Accounting, Human Resources Information System (HRIS), and Supply Chain Management (SCM).

CRM (Customer Relationship Management)
- Stores full client names and system-generated client IDs
- Stores products purchased (field manually updated by the account manager)
- Stores account manager names
- Goal: Enable each account manager to track the product/contract history of each client

Finance/Accounting
- Stores abbreviated customer names (the tool has a too-short character limit) and customer account numbers
- Stores a list of all company locations, using 3-digit country codes
- Stores abbreviated vendor names (same too-short character limit), vendor account numbers and vendor IDs with three leading zeros
- Stores Business Unit (BU) names and BU IDs
- Goal: Track all income, expenses and assets of the company

HRIS (Human Resources Information System)
- Stores all employee names and employee IDs
- Stores a list of all company locations with employee assignments, using 2-digit country codes
- Goal: Manage key details on employees

SCM (Supply Chain Management)
- Maintains the product list and system-generated product IDs
- Stores vendor names, vendor account numbers and vendor IDs (no leading zeros)
- Stores material IDs and names
- Goal: Track all vendors, materials from vendors, Work in Progress (WIP), and final products


Let’s assume that each system has captured data to support its own reporting and then sends daily copies to a data lake. That means four major enterprise systems have figured out multiple privacy and security requirements to contribute to the data lake. I would consider this a successful data collection model.

Note, however, that the four systems have overlap in field names, and the content in each area is just a little off — not so far as to make the data unusable, but enough to make it difficult. (I also intentionally left out a good connection between CRM Clients and Finance/Accounting Customers in my example, because stuff like that happens when systems are managed individually. And while various Extract, Transform and Load (ETL) tools or Semantic layers could help, this is beyond CRM Client = Finance/Accounting Customer.)
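To make the disconnect concrete, here is a minimal, hypothetical sketch of what a naive join across two of these systems runs into. The records, names and codes are invented for illustration only.

```python
# Hypothetical records illustrating the contextual disconnect between systems.
# All names, IDs and codes are invented for illustration.

crm_clients = [
    {"client_id": "C-1001", "client_name": "Globex Corporation International"},
]

finance_customers = [
    # Abbreviated name (too-short character limit) plus a separate account number.
    {"customer_account": "884213", "customer_name": "GLOBEX CORP INTL"},
]

finance_locations = [{"location": "Brussels HQ", "country": "BEL"}]  # 3-digit code
hris_locations = [{"location": "Brussels HQ", "country": "BE"}]      # 2-digit code

# A naive equality join on names finds nothing, even though the records
# clearly describe the same customer.
matches = [
    (c["client_name"], f["customer_name"])
    for c in crm_clients
    for f in finance_customers
    if c["client_name"] == f["customer_name"]
]
print(matches)  # [] -- no shared key, no shared naming convention

# The same gap appears for locations: "BEL" != "BE", so the two location
# lists don't line up without a translation table.
```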

If you think about customer lists, it’s not unreasonable for there to be hundreds, if not thousands, of customer records that, in this example, need to be reconciled with client names. This will have a significant impact on analytics.

Take an ad hoc operational example: Suppose a vendor can only provide half of the materials they normally provide for a key product. The company wants to prioritize delivery to customers who pay early, and they want to have account managers call all others and warn them of a delay. That should be easy to do, but because we are missing context between CRM and Finance/Accounting, and the CRM system is manually updated with products purchased, some poor employee will be staying late to do a lot of reconciling and create that context after the fact.
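That after-the-fact reconciliation often looks something like the sketch below: fuzzy-matching the abbreviated Finance/Accounting names against the CRM client names with the standard library, then hand-checking whatever falls out. The names and the similarity cutoff are hypothetical.

```python
import difflib

# Hypothetical data: abbreviated Finance/Accounting names vs. full CRM client names.
crm_client_names = [
    "Globex Corporation International",
    "Initech Holdings",
    "Acme Staffing Services",
]
finance_customer_names = ["GLOBEX CORP INTL", "INITECH HLDGS", "ACME STAFFING SVCS"]

def best_guess(finance_name, candidates):
    """Return the closest CRM client name, or None if nothing clears the cutoff."""
    hits = difflib.get_close_matches(finance_name.title(), candidates, n=1, cutoff=0.6)
    return hits[0] if hits else None

# Every guess still has to be reviewed by a person -- that is the late night.
for name in finance_customer_names:
    print(f"{name} -> {best_guess(name, crm_client_names)}")
```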

I’ve heard plenty of data professionals comment something like, “I spend 90% of my time cleaning data and 10% analyzing it on a project.” And the responses I hear are not, “Whaaaa?? You’re doing something wrong.” They are, “Oh man, I sooooo know what you mean.”

Whaaaa?? We’re doing something wrong.

The time analytics professionals spend cleaning and stitching data together is time not spent discovering correlations, connections and/or causation indicators that turn data into information and knowledge. This is ridiculous because today’s technologies can do so much of this work for us.

The point of a data watershed approach is to eliminate the missing context. The data watershed is not a technical model for how to get data into a lake; it’s a governance/technical model that ensures data has context when it enters a source system, and that context flows into the data lake.

If we return to my four example systems and take a watershed approach, the interaction looks more like this, with the arrows indicating how the data feeds each system:

[Figure: the four example systems in a data watershed, with arrows indicating how data feeds each system]

While many organizations do have data flowing from system to system, they often don’t have connections between every system. Additionally, it’s not always clear who should “own” the master list for a field.

In my view, the system that maintains the most metadata around a field is the system that “owns” the master data for that field. In my example above, both the HRIS and Finance/Accounting systems maintain location lists, but they use different country codes. Finance/Accounting will also maintain either depreciation schedules or lease agreements on those locations, so Finance/Accounting wins. The HRIS system, unless there is a tool limitation, should mirror and, preferably, be fed the location data from the Finance/Accounting system.
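As a rough sketch of that ownership rule (the system names, keys and records below are invented for illustration): a small registry records which system masters each shared field, and dependent systems mirror the master’s values rather than maintaining their own.

```python
# Hypothetical master-data registry: which system "owns" each shared field.
# System names, fields and records are invented for illustration.
FIELD_OWNERS = {
    "location": "finance",      # Finance/Accounting holds the richest metadata
    "vendor_id": "scm",
    "product_id": "scm",
    "employee_id": "hris",
}

# Master location list as maintained by the owning system (Finance/Accounting).
finance_locations = {
    "BRU-01": {"name": "Brussels HQ", "country": "BEL"},
    "ANT-02": {"name": "Antwerp Plant", "country": "BEL"},
}

def sync_mirror(master: dict, mirror: dict) -> dict:
    """Overwrite the mirror's records with the master's values so downstream
    copies (e.g., the HRIS location list) stay contextually consistent."""
    mirror.update({key: dict(value) for key, value in master.items()})
    return mirror

# The HRIS list mirrors the owner's data rather than maintaining its own codes.
hris_locations = {}
if FIELD_OWNERS["location"] == "finance":
    sync_mirror(finance_locations, hris_locations)
print(hris_locations["BRU-01"]["country"])  # "BEL" -- the same code everywhere
```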

In this example, when each system sends its data to a data lake, it has natural context. Data analytics professionals can grab any field and know the data is going to match – though I would argue that best practice would be to use the field from the “master” system. However, if everything is working right, this should be irrelevant.

Since a data watershed is a governance/technical model, it addresses not just how data flows but also how it’s governed. This stewardship requires cross-departmental collaboration and accountability. The processes are neither new nor necessarily difficult, but the execution can be complex. The result is worth the effort, though: all enterprise data ends up supporting advanced analytics.

The governance model I picture is an amalgamation of DevOps – the merging of software development and IT operations – and the United Federation of Planets (UFP) from “Star Trek.”

By putting data management and data analytics together in the same way the industry has combined software developers and IT operations, there is less opportunity for conflicting priorities. And, any differences must be reconciled if the project hopes to succeed.

Beyond what it borrows from the DevOps paradigm, the reason the governance model I like best is the UFP – and not just because I get to drop a Trekkie reference – is that it is the government of a large fictional universe, built on the best practices and known failures of our own individual government structures.

The UFP has a central leadership body, an advising cabinet and semiautonomous member states. I think this setup is flexible enough to work with multiple organizational designs, and it enables holistic data management while addressing the nuances of individual systems.

I would expect the “President of the Federation” to be a Chief Information, Technology, Data, Analytics, etc. Officer. The “Cabinet” would be made up of Master Data Management (MDM), Records and Retention, Legal, HR, IT Operations, etc. And the “Council” members would be the analytics professionals from all the data-generating and -consuming business units in the organization.

And, it’s this last part – a sort of Vulcan Bill of Rights – I feel the strongest about:

Whoever is responsible for providing the analytics should be included in the governance of the data. Those who have felt the pain of munging data know what needs to change – and they need to be empowered to change it.

Data watersheds represent an important shift in thinking. By expanding the data lake model to include the management of enterprise data at its source, we change the conversation to include data governance in the same breath as data analytics — always.

With this approach, data governance isn’t a “known issue” to be addressed by some and tabled by others; it’s an integral part of the paradigm. And while it may take more work to implement at the outset, the dividends from making the commitment are immense: Data in context.

From hysteria to reality: Risk-Based Transformation (RBT)


The digital movement is real. Consumers now have more content at their fingertips than ever before, and that access has changed how we do business. Companies like Airbnb, Uber and Waze have disrupted typical business models, forcing established players across industries to find ways to stay relevant in the ever-emerging digital age. This post is not about that. Well, not in the strictest sense. There are countless articles explaining the value of being digital; there are very few about how to get there. Let’s explore how to get there together, through an approach I have named Risk-Based Transformation. RBT’s strength is that it puts technology, application, information and business into one equation.

An approach that fits your specific needs

I’m relocating very soon, and with that comes the joys of a cross-country journey. Being the planning type, I started plotting my journey. I didn’t really know how to start, so I went to various websites to calculate drive times. I even found one that would give you a suggested route based on a number of inputs. These were great tools but they were not able to account for some of my real struggles, like how far is too far to drive with a 5- and 3-year-old.

Where are the best rest stops where we can “burn” energy — ones that have a playground or a place to run? (After being cooped up in a car for hours, getting exercise is important!) How about family- and pet-friendly places to visit along the way to break up the trip? What about the zig-zag visits we need to make to see family?

The list goes on. So while I was able to use these tools to create a route, it wasn’t one that really addressed any of the questions that were on my mind. Organizations of all sizes and across all industries are on this digital journey but often the map to get there is too broad, too generic, and doesn’t provide a clear path based on your unique needs.

A different approach is needed, one in which you can benefit from the experience of others, whilst taking the uniqueness of your business into account. Like planning a trip, it’s good to use outside views in particular to give that wider industry view; however, that’s only a piece of the puzzle. Each business has its own culture, struggles and goals that bring a unique perspective.

RBT framework

To help with this process, I have created a framework for RBT. At a high level, RBT takes into account your current technology (infrastructure), application footprint, the value of the information, and the risk to the business, weighted from least (left) to highest (right). This framework gives a sense of where to start and where the smart spend is. See the flow below:

[Figure: Risk-Based Transformation framework flow]

Following this left to right, you can add or remove evaluation factors based on your needs. Each chevron takes a particular view in isolation (in a vacuum, if you will), so the technology is rated only on itself; it gains context as you move through each chevron. The result is a final score: the higher the score, the higher the risk to the business.

Depending on your circumstances, you can approach it David Letterman style and take your top 10 list of transformation candidates and run it through the next logic flow (watch for a future blog on how to determine treatment methodology). Or, as we did with a client recently, you can start with your top 50 applications. The point is to get to a place that enables you to start making informed next steps that meet your needs and budget to get the most “bang” for your investment.

The idea behind this framework is to use data in the right context to present an informed view. For example, you can build your questionnaires on SharePoint or Slack or another collaboration platform that also allows the creation of dashboard views. You can build dashboards in Excel, Access, MySQL or whatever technology you’re comfortable with in order to build an informed data-driven view, evaluating risk against transformation objectives. The key is that you need to assign values to questions in order to calculate consistent measurements across the board.
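As a hedged sketch of what assigning values and weights can look like in practice (the chevron names, weights and answer values below are invented, not an official RBT scale):

```python
# Hypothetical RBT scoring sketch. Chevron names, weights and answers are
# illustrative; tune them to your own evaluation factors.

# Left-to-right chevrons, weighted from least to highest.
CHEVRON_WEIGHTS = {
    "technology": 1.0,
    "application": 2.0,
    "information": 3.0,
    "business_risk": 4.0,
}

# Each application is rated per chevron in isolation ("in a vacuum"),
# e.g. by averaging 1-5 answers from its questionnaire.
applications = {
    "Inventory app":  {"technology": 2, "application": 3, "information": 4, "business_risk": 5},
    "HR portal":      {"technology": 4, "application": 2, "information": 2, "business_risk": 2},
    "Legacy billing": {"technology": 5, "application": 4, "information": 5, "business_risk": 5},
}

def risk_score(chevron_scores):
    """Weighted sum across chevrons; a higher score means higher risk to the business."""
    return sum(CHEVRON_WEIGHTS[c] * s for c, s in chevron_scores.items())

# Rank the candidates and take the "top N" for treatment planning.
ranked = sorted(applications, key=lambda app: risk_score(applications[app]), reverse=True)
for app in ranked:
    print(f"{app}: {risk_score(applications[app]):.1f}")
```

Whether this lives in Excel, SharePoint or a small script matters far less than the consistency: the same questions, the same value scale and the same weights applied across every candidate.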

Service management example

Let’s take service management as an example. Up front you would need to determine what “good” looks like, and then based on that, have questions like these below answered:

[Figure: sample service management assessment questions]

These questions could be answered by IT support, application support, business owners, the life cycle management group, or other relevant groups. When we ran through the first iterations of this framework, we had our client fill it out first; then we filled it out based on our own data points. Our data points looked different because this was an outsourcing client whose IT we ran. We had an in-a-vacuum view of both what we had inherited (the equipment from the existing estate that had been transferred to us) and what we had newly built.

We also had access to systems that the client did not, as they no longer had root access to them. The client’s context included future plans for life cycle, as they owned life cycle management. With those combined views, we had a broader sense of the environment. The methodology could also be used with business units, allowing them to give their view of these systems; this gave IT an even more rounded view, because it let us see how the client (the business) saw their environment versus how we (IT support) saw it.

That data was then normalized to give a joint view for senior leadership. The idea is that this became a jointly owned, data-backed view of the way forward that IT leadership could confidently stand behind. The interesting part is that although the server estate was 5 to 10 years old, we realized that upgrading the infrastructure was not the smartest place to start. In fact, the actual hardware was determined to be the lowest risk. The highest risk was storage, which came as quite a surprise to all.


A living framework

Many years ago, when you plotted a cross-country drive on a paper map, it was based on information from a fixed point in time: this was the best route when you drew the line on the map. Now, personal navigation devices hooked into real-time data change that course based on current conditions. In the same way, the RBT model is a living framework; it should go through regular iterations so you can make course corrections as you go forward.

The intent with this framework and thinking is to build a context that makes sense for your needs, and then present data in context that allows for better planning. That better planning should lead to a more efficient digital journey as we all continue to stay with, or ahead of, the curve.

If you have enjoyed this, look out for my next post, where I will detail how the RBT framework is applied, along with the treatment-buckets methodology.

Insurers’ appreciation for orthogonal data


It is anticipated that within the next three years, on average every human being on the planet will create about 1.7 megabytes of new information every second. This includes 40,000 Google searches every second, 31 million Facebook messages every minute, and over 400,000 hours of new YouTube videos every day.

At first glance, the importance of this data may not be obvious. But for the insurance industry, tapping into this and other kinds of orthogonal (statistically independent) data is key to finding new ways to create value.

A clearer picture of individual risk

By paying closer attention to the data people create as part of their everyday lives, insurance companies can better anticipate needs, personalize offers, tailor customer experience and streamline claims. Using a wider variety of information is especially useful in better understanding and managing individual risks. For instance, behavior data from sensors, shared through an opt-in customer engagement program, provides insurers with the insight needed to initially assess and price the risk, and mitigate or even prevent subsequent losses.

Take, for example, the use of telematics data from sensors embedded in cars and smartphones. When shared, the raw telemetry data provides insurers with insight into an individual’s actual driving behaviors and patterns. Insurers can reward lower-risk drivers with discounts or rebates while providing education and real-time feedback to help improve the risk profile of higher-risk drivers. Geofencing and other location-based services can further enhance day-to-day customer engagement. In the event of an accident, that same sensor data can be used to initiate an automated FNOL (first notice of loss), initially assess vehicle damage, and digitally recreate and visualize events before, during and after the crash.
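As a simplified, hypothetical sketch of how raw telemetry events might be turned into a risk indicator and a pricing action (the event types, weights, thresholds and tiers below are invented for illustration):

```python
# Hypothetical telematics scoring sketch. Event types, weights, thresholds and
# tiers are invented; real usage-based insurance models are far richer.

EVENT_WEIGHTS = {
    "harsh_braking": 3.0,
    "rapid_acceleration": 2.0,
    "speeding": 4.0,
    "night_driving_hour": 1.0,
}

def risk_per_100_miles(events: dict, miles_driven: float) -> float:
    """Weighted event count, normalized by distance driven."""
    weighted = sum(EVENT_WEIGHTS.get(name, 0.0) * count for name, count in events.items())
    return 100.0 * weighted / max(miles_driven, 1.0)

def pricing_action(score: float) -> str:
    """Map the normalized score to an illustrative engagement/pricing response."""
    if score < 5:
        return "lower risk: premium discount or rebate"
    if score < 15:
        return "moderate risk: real-time coaching and feedback"
    return "higher risk: targeted education, no discount"

monthly_events = {"harsh_braking": 4, "speeding": 2, "night_driving_hour": 10}
score = risk_per_100_miles(monthly_events, miles_driven=820)
print(f"{score:.1f} -> {pricing_action(score)}")
```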

Using individual driver behavior to monitor and manage risk is just one way to leverage orthogonal data in insurance. Ultimately, new behavioral and lifestyle data sources have the potential to transform every aspect of the insurance value chain. Forward-looking insurers will tap into these emerging data sources to drive product innovation, deepen customer engagement, improve safety and well-being and even prevent insured losses. For those who invest in the platforms and tools needed to harness the value of orthogonal data, the advantages will be significant.
