Beyond owning: From conventional to unconventional analytics data

The scale of big data, the data deluge, the 4Vs of data, and everything in between… We’ve all heard countless adjectives attached to “data”, and the many reports and literature have taken the vocabulary and interpretation of data to a whole new level. As a result, the marketplace is split into exaggerators, implementers, and disruptors. Which one are you?

Picture this! A telecom giant decides to invest in opening 200 physical stores in 2017. How do they go about it? How do they decide on the optimal locations? Which neighbourhoods will garner maximum footfall and conversion?

And then there is a leading CPG player trying to figure out where to deploy their ice cream trikes. Mind you, we are talking about impulse purchases of perishable goods. How do they decide how many trikes to deploy and where, and which flavours will work best in each region?

In both examples, if the enterprises were to make decisions based only on the analytics data already available to them (read: owned data), they would make the same mistake day in and day out – using past data to make present decisions and future investments. The effect stares you in the face: your view of true market potential remains skewed, your understanding of customer sentiment grows obsolete, and your ROI will seldom go beyond baseline estimates. You become vulnerable to competition, and calculated risks become too calculated to change the game.

Disruption in current times requires enterprises to undergo a paradigm shift: from owning data to seeking it. This transition calls for a conscious set-up:

Power of unconstrained thinking

As adults, we are usually too constrained by what we know. We get the jitters when it comes to stepping out of our comfort zones, which prevents us from venturing into the wild. The real learning, though – in life, in analytics, or in any other field – happens in the wild. To capitalize on this, individuals and enterprises need to cultivate an almost child-like, inhibition-free culture of ‘unconstrained thinking’.

Each time you are confronted with an unconventional business problem, pause and ask yourself: if I had unconstrained access to all the data in the world, how would my solution design change? What data (imagined or real) would I require to execute the new design?

Power of approximate reality

There is a lot we don’t know and will never know with 100% accuracy. However, this has never stopped the doers from disrupting the world. Unconstrained thinking needs to meet approximate reality to bear tangible outcomes.

The question to ask here is: what are the nearest available approximations of all the data streams I dreamt of in my unconstrained ideation?

You will be amazed at the outcomes. For example, Yelp data can reveal the hyperlocal affluence of a catchment population (resident as well as moving), and imagery captured from several thousand feet in the air can help estimate footfall in your competitors’ stores.

This is the power of combining unconstrained thinking and approximate reality. The possibilities are limitless.

Filter to differentiate signal from noise – Data Triangulation

Remember, you are no longer only as smart as the data you own; you are as smart as the data you earn and seek. But at a time when analytics data is abundant and streaming, the bigger decision while seeking data is identifying the “data of relevance”. The ability to filter signal from noise is critical here, and in the absence of on-ground validation, triangulation is the way to go.

The data ‘purists’ among us will debate the triangulation approach. But welcome to the world of data you don’t own. Here, some conventions need to be broken and mindsets need to shift. We at Anteelo have found data triangulation to be one of the most reliable ways to validate the veracity of unfamiliar and un-vouched data sources.
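As a toy illustration of triangulation (the sources, numbers, and tolerance below are entirely hypothetical), you can compare independent estimates of the same quantity and keep only the sources that broadly agree:

```python
# Toy sketch of data triangulation: compare independent estimates of the
# same quantity (e.g., daily footfall at a location) and keep a source
# only if it broadly agrees with the consensus of the others.

def triangulate(estimates, tolerance=0.25):
    """Return the mean of estimates within `tolerance` (relative) of the
    median across all sources, plus the list of outlier sources."""
    values = sorted(estimates.values())
    mid = values[len(values) // 2]  # median (odd count assumed for brevity)
    agreeing = {src: v for src, v in estimates.items()
                if abs(v - mid) / mid <= tolerance}
    consensus = sum(agreeing.values()) / len(agreeing)
    return consensus, sorted(set(estimates) - set(agreeing))

# Three hypothetical, unfamiliar data sources estimating the same footfall
estimates = {"satellite": 1200, "yelp_checkins": 1100, "mobility_panel": 3000}
consensus, outliers = triangulate(estimates)
# The panel estimate disagrees sharply with the other two and is flagged.
```

The point is not the arithmetic, but the habit: no single unfamiliar source is trusted on its own.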

Ability to tame the wild data

Unfortunately, old wine in a new bottle will not taste good. When you explore data in the wild – beyond the enterprise firewall – conventional wisdom and experience will not suffice. Your data science teams need unique capabilities and technological know-how to harness the power of data from unconventional sources. In the two examples above – the telecom giant and the CPG player – our data science team capitalized on freely available hyperlocal data residing in Google Maps, Yelp, and satellite imagery to build a strong location-optimization solution.

Having worked with multiple clients across industries, we have come to realize the power of this approach – of owning and seeking data – with no compromise on data integrity, security, and governance. After all, game changers and disruptors are seldom followers; they pave their own path and choose to find the needle in the haystack, too!

Learn these 5 scaling RPA secrets to transform your organization

With robotic process automation (RPA) pilots almost everywhere, creating industrial scale has emerged as the new challenge for IT departments and shared service centres (SSCs) alike. The organisation, processes, tooling and infrastructure required to quickly develop a few in-house robots cannot simply be incremented at scale. Enterprises need to re-design their entire approach. According to a survey by HFS Research, the biggest gap in RPA services capabilities is not in RPA planning and implementation, but in post-implementation.

Here are five ways to meet the key challenges we hear from both IT and SSC executives about scaling robotics:

  1. Begin with the end in mind and start looking at an operating model strategy that supports the bots where they will eventually be running. However, whether you manage the bots from a centralised production environment or on agents’ desktops, there is no way around the IT department once you’ve decided to scale robots. IT departments have to make technical resources and support staff available; manage the configuration, software distribution and robot scripts; provide and maintain security access; plus track and respond to incidents. Unfortunately, it takes time and effort to configure such processes, and IT can have more pressing priorities, but presenting a clear operating strategy can help spur them on.
  2. If a business continuity plan has not yet been devised, it needs to be. If systems go down, the bots need to be re-started along with the entire software stack. Some organisations create mirrored environments they can switch to in case of extended system failures.
  3. Leverage the cloud. Cloud is generally acknowledged as the way forward for large-scale RPA operations. Cloud makes it possible to provision extra bots with one click, for example, to address sudden peaks in transactions. Cloud also enables efficient, consumption-based models. However, some large enterprises have ring-fenced clouds due to regulations in critical industries, such as in defence or banking, and this needs to be considered.
  4. Bring corporate security policies into force. Can hundreds of robots running in parallel access all corporate systems that require a human being’s credentials? They do not have an address, ID badge, a manager, an office, or a birth date – which may be mandatory to comply with existing corporate security policies. Corporate security policies need to reflect the new complexities.
  5. Realize that constant change is the rule, not the exception, for bots. Some companies leave the technical changes to IT but manage the functional changes in the business units (finance, human resources, etc.) that own the business process, an approach that provides faster resolution. For the same reason, the relevant business units can also maintain re-usable libraries of standard information. Things become more complex, though, when third parties are in the picture, like tool vendors, RPA consultants and/or business process outsourcing (BPO) providers. In fact, governance is most often cited by IT and SSC leaders as a key challenge here, and it is common for RPA investments not to progress past development and test phases due to governance roadblocks. A preferred approach tends to be establishing a centre of excellence — typically within the enterprise SSC organisation — with responsibility over the policies, governance and tool/vendor selection for RPA. Still, once bots take a significant share of workload from human agents, does it make sense to keep the SSC and IT under separate organisations? And what is the impact on the human workforce? As we train bots to act like humans, businesses need to train and acclimatize their human workforce to co-operate with bots, understand how they operate and know where to intervene.

In summary, RPA is a very hot topic at the moment, and while much of the hype centres on enabling technologies that accelerate robot development, the real challenge in scaling your RPA digital workforce lies in better operating-model design, a flexible cloud-based platform and, of course, a better appreciation of human nature.

How to Pitch Design Ideas to Clients like a Pro!

Effective design is the best sales pitch! Design is good when it serves a purpose and turns a few heads, but it becomes phenomenal when it can win your client over pixel by pixel. And this is where most designers hit a roadblock: they somehow fail to associate “selling” with designing. And many of those who do are probably doing it wrong. We designers come across a wide variety of clients to appease. Some turn out to be friendly and supportive, handing over full liberty on the project along with all the other important background you might need. Others are more specific about their requirements and prefer to keep that freedom on a leash. Whoever we work with, the bottom line remains the same: ideas don’t sell themselves. The key is to adapt your ‘sales strategy’ to suit the customer. These are soft skills every designer must have!

Playing the role of an effective virtual tour guide isn’t a cakewalk, but I have a few valuable, time-tested skills to help you add muscle to your selling.

Here are some pointers that you can mobilize to sell the design to your clients.

1. Know your Client: Get Talking

The number one rule of sales is getting to know your customer. This is where all the magic happens. It always starts with a string of conversations; the trick is to not let the thread go cold. At the start of a project, gather as much information about the client as possible. This will serve you well later in navigating what actually matters to your client. You can ask about their city (a classic conversation starter), the weather maybe, or their likes and so on. Hit the right buttons, and you would be amazed at what a simple conversation can uncork about your client’s design preferences – no Sherlock Holmes deductions required. Here’s what happens when you get talking:

  • You would get a clearer picture of what your client would prefer in your design.
  • A friendly conversation establishes trust. And once your client begins to trust you, the restrictions fall apart giving way for a fair amount of liberty on the projects you’re handling.
  • Once clients feel comfortable working with you, 80% of your pitching is done. They would start taking your designs more seriously and who knows, their next string of projects might have your name on them.
  • Establishing a relationship with the client is a fundamental precursor to pitching design ideas to them. It always gets them listening and responding more positively to your ideas.

2. DO YOUR HOMEWORK: GAIN CREDIBILITY

Decision-making in design can be challenging. It is not like plugging variables into a formula to get the right answer, so there is always room for error. This is why you need an answer for everything you do – because, rest assured, there will be questions!

The business of design dictates that there be logical reasoning behind every UI/UX move you make. There needs to be a reason for your chosen colour palette or your one-page layout preference. Backing your ideas up with concrete statistics is the way to go; a little research goes a long way. It is always advisable to have complete knowledge of the solution you are about to present, since this dramatically reduces the chances of the thought process going astray. This way, you can let the data talk for itself. And clients seldom argue with data.

However, where data falls short, big players come in handy. Another way to gain credibility is by making examples out of well-recognized names in the market. Think of this as a simple hack to the path of least resistance. If your idea coincides with Google’s, to some extent, then that should definitely be a part of your pitching strategy. This little information can open up doors you never thought existed. The bottom line is, clients will have a lot of queries, and you need to have all the answers ready to make for a smooth design selling work-day!

3. KNOW THE TRENDS: DESIGN FOR THE FUTURE

Don’t just be a great designer, be a smart one. We happen to live in a world where nothing is constant, except change. And when it comes to design, change is what pulls the wagon.

The next time you have a design intervention, do quick trend research. Make yourself aware of the big trends in the market and find out which ones will stick. You can incorporate those in your designs and make them work. Thinking out of the box is a gift, but thinking smart is an acquired taste. Whatever you do, keep in mind that there’s a difference between an unprecedented risk and a well-thought-out, researched one. You definitely want to avoid the former.

If you look closely, you would find that there exist two broad kinds of designers, the trend-setters and the trend-followers. Who do you want to be?

4. PRESENTATION, PRESENTATION, PRESENTATION

Even the best, path-breaking, award-winning ideas need a good presentation to get them out of the shed. That is why, to sell your design ideas effectively, you will need more than a few sketches or words.

Consider making a pitch deck that communicates your ideas in a way that catches the client’s imagination. Make sure they get the bigger picture. While addressing the client, put everything in context. Use mockup templates, distribute design samples, go the extra mile. This will help the client visualize what the final design will look like. The closer your working prototype comes to the real-life design functionality, the closer you will be to sealing the deal.

5. DON’T UNDERESTIMATE YOUR CLIENT: ACCEPT CRITICISM

In a profession without absolutes, criticism comes in abundance. Your work might be your territory, but keep in mind that you have been hired to solve a problem, and how effectively you do so determines the conversion rate. Your clients may not have all the design know-how, but they know exactly what they want and how they want it. So it’s best to stay on top of your game and pitch your design ideas without getting too defensive.

Your clients need to know that you are distilling their design ideas and steering them to the best possible fruition, not taking them as a challenge. So tread with a touch of finesse. Instead of responding “I don’t think this change is required”, you could soften it to “While the changes you have suggested are completely doable, you might find that the version I have submitted already satisfies these requirements, if you re-examine it.”

The manner in which you accept feedback on the design is critical to its final acceptance. You will find yourself in situations where a positive attitude, attention to detail and an innate ability to address all the pain points will make the client more receptive to your version of the final design than otherwise.

Quick Tip: You are on the same side as your client, stop taking it as a challenge.

Summary

With design, you need to keep two things in mind:

  • Less is more
  • It is always better to show than to tell

Even though the thought of “selling” might make you cringe, it is a milestone you must reach for your designs to see the light of day. Having said that, you need to believe in your pixels and your instincts to see you through the worst, because at the end of the day, you are what you present. Figuring out the art of presenting your design ideas, pitching and articulating them proficiently, and ‘closing’ the design sale are important skills that will come in handy regularly over the course of your career.

The growing popularity of low-code/no-code application development platforms

“Software is eating the world.” That was the bold proclamation renowned innovator and venture capitalist Marc Andreessen made in an article he wrote for the Wall Street Journal. More and more businesses are being run by software, he argued, or are differentiating themselves and disrupting their competitors and industries with it. Today, that article is regarded as one of the seminal works in shaping how people think about digital transformation — using digital technologies to bring about great change in the way individuals and organizations think, operate, communicate or collaborate.

Often, at the heart of many digital transformation efforts is the desire to enable the organization to be more agile or responsive to change. This requires looking for ways to dramatically reduce the time needed to develop and deploy software, and simplify and optimize the processes around the maintenance of software so it can be deployed quickly and with greater efficiency.

Another key outcome that is part of many digital transformation efforts is enabling the organization to be more innovative — finding ways to transform how the organization operates and realize dramatic improvements in efficiency or effectiveness; or creating new value by either delivering new products and services or creating new business models.

For organizations using conventional approaches to developing software, this can be a tall order. Developing new applications can take too long or require very specialized and expensive skills that are in short supply or hard to retain. Maintaining existing programs can be daunting as well, as they struggle with increasing complexity and the weight of mounting technical debt.

Enter “low-code” or “no-code” application development platforms. This emerging category of software provides organizations with an easier-to-understand — often visual — declarative style of software development, augmented by a simpler maintenance and deployment model.

Essentially, these tools allow developers, or even non-developers, to build applications quickly and easily on an ongoing basis. Unlike the Rapid Application Development (RAD) tools of the past, they are often offered as a service and accessed via the cloud, with ready integrations to various data sources and other applications (often via RESTful APIs) available out of the box. They also come with integrated tools for application lifecycle management, such as versioning, testing, and deployment.

With these new platforms, organizations can realize three things:

1. Faster time to value

The more intuitive nature of these platforms allows organizations to get started quickly and create functional prototypes without having to code from scratch. Pre-built and reusable templates of common application patterns are often provided, allowing developers to create new applications in hours or days, rather than weeks or months. When coupled with agile development approaches, these platforms allow developers to move through the process of ideating, prototyping, testing, releasing and refining more quickly than they otherwise would with conventional application development approaches.

2. Greater efficiency at scale

Low-code/no-code application development platforms allow developers to focus on building the unique or differentiating functionality of their applications and not worry about basic underlying services/functionality such as authentication, user management, data retrieval and manipulation, integration, reporting, device-specific optimization, and others.

These platforms also provide tools for developers to easily manage the user interface, data model, business rules and definitions, making on-going management easy and straightforward — so easy, in fact, that even less experienced developers can do it themselves, lessening the need for costly or hard-to-find expert developers. These tools also remove the need for developers and operations folks to keep updating the frameworks, infrastructure and other underlying technology behind the application, as the platform provider manages these.

3. Innovative Thinking

Software development is a highly creative and iterative process. Using low-code or no-code development platforms, in combination with user-centric approaches such as design thinking, organizations can rapidly bring an idea to pilot in order to get early user feedback or market validation without spending too much time and effort (the “Minimum Viable Product”, as coined by Eric Ries in his book “The Lean Startup”).

Not only that, because these platforms make it easy to get started, even non-professional developers or “citizen developers,” who more likely than not have a deeper or more intimate understanding of the business and end user or customer needs, can develop the MVP themselves. This allows the organization to translate ideas to action much faster and innovate on a wider scale.

While offering a lot of benefits, low-code/no-code application development platforms are certainly not a wholesale replacement to conventional application development methods (at least not yet). There are still situations where full control of the technology stack can benefit the organization—especially if it’s the anchor or foundation of the business, the source of differentiation, or source of competitive advantage. However, in most cases organizations will benefit from having these types of platforms as part of their toolbox, especially as they embark on any digital transformation journey.

Key Concept Extraction from NLP Anthology (Part 2)

Key Concept Extraction: Intelligent Audio Transcript Analytics – Extracting Key Phrases for Scaling Industrial NLP Applications

The COVID-19 pandemic that hit us last year brought a massive cultural shift, causing millions of people across the world to switch overnight to remote work environments and to use various collaboration tools and business applications to overcome communication barriers.

In turn, this generates humongous amounts of data in audio format. Converting this data to text provides a massive opportunity for businesses to distill meaningful insights.

One of the essential steps in an in-depth analysis of voice data is ‘Key Concept Extraction,’ which determines the main topics of business calls. Once this identification is done accurately, it enables many downstream applications.

One way to extract key concepts is Topic Modelling, an unsupervised machine learning technique that clusters words into topics by detecting patterns and recurring words. However, it cannot guarantee precise results, and it is sensitive to the many transcription errors introduced when converting audio to text.

Let’s glance at the existing toolkits that can be used for topic modelling.

Some Selected Topic Modelling (TM) Toolkits

  • Stanford TMT : It is designed to help social scientists or researchers analyze massive datasets with a significant textual component and monitor word usage.

Stanford Topic Modeling Toolbox

  • VISTopic : It is a hierarchical visual analytics system for analyzing extensive text collections using hierarchical latent tree models.

VISTopic: A visual analytics system for making sense of large document collections using hierarchical topic modeling - ScienceDirect

  • MALLET : It is a Java-based package that includes sophisticated tools for document classification, NLP, TM, information extraction, and clustering for analyzing large amounts of unlabelled text.

MALLET homepage

  • FiveFilters : It is a free software solution that builds a list of the most relevant terms from any given text in JSON format.

fivefilters (FiveFilters.org) · GitHub

  • Gensim : It is an open-source TM toolkit implemented in Python that leverages unstructured digital texts, data streams, and incremental algorithms to extract semantic topics from documents automatically.

GitHub - RaRe-Technologies/gensim: Topic Modelling for Humans

Anteelo’s AI Center of Excellence (AI CoE)

Our AI CoE team has developed a custom solution for key concept extraction that addresses the challenges we discussed above. The whole pipeline can be broken down into four stages, which follow the “high recall to high precision” system design using a combination of rules and state-of-the-art language models like BERT.

Pipeline:

1) Phrase extraction: The pipeline starts with basic text pre-processing – eliminating redundancies, lowercasing text, and so on. Next, specific rules are applied to extract meaningful phrases from the texts.
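A minimal sketch of what such rules might look like; this RAKE-style splitting on stop words is an illustrative stand-in, not the pipeline’s actual rule set, and the stop-word list is a tiny subset:

```python
# Rule-based candidate-phrase extraction: lowercase the text, then split
# on punctuation and stop words so the remaining contiguous word runs
# become candidate phrases.
import re

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "are",
              "we", "for", "on", "in", "so", "that", "this", "it"}

def extract_phrases(text):
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP_WORDS:
            if current:                      # stop word closes a phrase
                phrases.append(" ".join(current))
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(" ".join(current))
    return phrases

phrases = extract_phrases("We discussed the cloud migration roadmap and the pricing model.")
# -> ['discussed', 'cloud migration roadmap', 'pricing model']
```

Single-word leftovers like “discussed” are exactly the kind of noise the next stage filters out.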

2) Noise removal: This stage of the pipeline filters the extracted phrases, removing noisy ones based on the signals below:

  • Named Entity Recognition (NER): Entities of certain types, such as quantity, time, and location, that are most likely to be noise for the given task are dropped from the set of phrases.
  • Stop-words: A dynamically generated list of stop words and phrases obtained from casual talk removal [refer to the first blog of the series for details regarding the casual talk removal (CTR) module] is used to identify noisy phrases.
  • IDF: IDF values of phrases are used to remove common recurring phrases that are part of the usual greetings in an audio call.
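The IDF signal can be sketched as follows; the threshold and phrases are hypothetical:

```python
# IDF-based filtering: phrases that occur in most calls (greetings,
# pleasantries) get a low IDF and are dropped.
import math

def idf_filter(phrases_per_call, min_idf=0.2):
    n_calls = len(phrases_per_call)
    doc_freq = {}
    for call in phrases_per_call:
        for p in set(call):                 # count each phrase once per call
            doc_freq[p] = doc_freq.get(p, 0) + 1
    keep = set()
    for p, df in doc_freq.items():
        idf = math.log(n_calls / df)        # 0 when the phrase is in every call
        if idf >= min_idf:
            keep.add(p)
    return keep

calls = [
    ["hope you are well", "cloud migration"],
    ["hope you are well", "pricing model"],
    ["hope you are well", "cloud migration", "renewal terms"],
]
kept = idf_filter(calls)
# "hope you are well" appears in every call (IDF = 0) and is dropped.
```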

3) Phrase normalization: After removing the noise, the pipeline combines semantically and syntactically similar phrases. To learn phrase embeddings, the module uses a state-of-the-art BERT language model and domain-trained word embeddings. For example, “Price Efficiency Across Enterprise” and “Business-Venture Cost Optimization” will be clubbed together, as they essentially mean the same thing.
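A toy sketch of the clustering idea: the 3-d vectors below stand in for real BERT embeddings, and the similarity threshold is illustrative:

```python
# Cluster phrases whose embeddings are close in cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cluster(embeddings, threshold=0.9):
    clusters = []                           # each cluster: list of phrases
    for phrase, vec in embeddings.items():
        for c in clusters:
            rep = embeddings[c[0]]          # compare to cluster representative
            if cosine(vec, rep) >= threshold:
                c.append(phrase)
                break
        else:
            clusters.append([phrase])       # no close cluster: start a new one
    return clusters

embeddings = {
    "price efficiency across enterprise": (0.9, 0.1, 0.0),
    "business-venture cost optimization": (0.85, 0.15, 0.05),
    "employee onboarding":                (0.0, 0.2, 0.95),
}
groups = cluster(embeddings)
# The two cost-related phrases land in one cluster; onboarding stands alone.
```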

4) Phrase ranking: The final stage of the pipeline ranks the remaining phrases using various metadata signals, such as frequency, number of similar phrases, and linguistic POS patterns. These signals are not exhaustive; others may be added based on any additional data present.
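A hypothetical scoring function combining such signals might look like this; the weights are illustrative, not tuned pipeline values:

```python
# Combine frequency, cluster size, and a POS-pattern bonus into one score.
def rank_phrases(stats, w_freq=1.0, w_cluster=2.0, w_pos=0.5):
    scored = []
    for phrase, s in stats.items():
        score = (w_freq * s["freq"]
                 + w_cluster * s["n_similar"]
                 + w_pos * s["pos_bonus"])
        scored.append((score, phrase))
    return [p for _, p in sorted(scored, reverse=True)]

stats = {
    "cloud migration": {"freq": 5, "n_similar": 2, "pos_bonus": 1},
    "pricing model":   {"freq": 3, "n_similar": 1, "pos_bonus": 1},
    "next tuesday":    {"freq": 4, "n_similar": 0, "pos_bonus": 0},
}
ranking = rank_phrases(stats)
# "cloud migration" scores highest (5 + 4 + 0.5 = 9.5)
```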

Natural Language Intent Recognition (Part 3) of the NLP Anthology

Natural Language Intent Recognition: Intelligent Audio Transcript Analytics – Using Semantic Analysis to Understand the User’s Intent

In the modern business landscape, timing is everything. Quickly identifying a user’s intent can give you a leg up on your competition. How? It enables you to respond actively to a potential customer’s interest and multiplies your chances of influencing key decision-makers through meaningful conversations.

But if you receive thousands of customer interactions a day, detecting customer intent in your unstructured data is challenging. The good news is that you can automate intent classification with artificial intelligence, identifying intent in thousands of emails, social media posts, and more in real time and prioritizing responses to potential customers.

Raise your hand if you’re a business finding it increasingly complex to detect user intent in voluminous unstructured data sets full of long-winded sentences and multiple juxtaposed objectives. Chances are your hand is up.

The good news is that you now have a solution to this.

What is Intent?

Simply put, intent refers to anything a user wants to accomplish.

From a technical perspective, we define intent as a single sentence, or a group of 2–3 contiguous sentences, that can convey an idea on its own with the necessary context. Extracting the call intent enables many downstream applications, such as better content creation and planning.

3 Challenges in Natural Language Intent Recognition

We discussed some challenges in part-1 of this 4-blog series. Here we discuss three more challenges specific to intent recognition (or intent classification).

  • There can be multiple intents present across the call transcript, as in the example discussed above.
  • Differentiating between the client’s intent and its details – in the above example, between the intent (to know about the growth percentage) and its details.
  • Missing or incorrect punctuation leading to wrong sentences being extracted as questions – for example, “I’m not sure what the report says?”, where the question mark is incorrect punctuation.

Anteelo’s NL-IR Approach

Pre-processing

The cleaning and casual talk removal steps, described in Part 1 of this 4-blog series, are applied to remove unwanted sentences from the transcripts. This important step strongly affects the output of the subsequent steps.

For instance, “How are you, Cathy? How was your vacation?” should not be extracted by the Question Analytics module, which we will explore later in this blog. Intents can occur throughout the call; however, we observed that 92% of the time the intent was in the first half of the call, so we focused on it to increase the precision of the system.

Feature extraction

  • Natural Language Question Extraction: To extract the questions that clients ask, we use the Anteelo Question Analytics NLP Accelerator, which follows a hybrid approach (a combination of rule-based and supervised methods). The rule-based approach leverages 5W-1H words and four generalized POS-tag and dependency-parser patterns to detect the starting point of the interrogative part of a sentence, if present. The supervised classifier was trained on ~100k questions.

  • Constraint-based Intent Sentence Extraction: Identifies the objectives of the client that are not conveyed in the form of questions. Intent identification is done by skip-gram matching against two generalized dependency-parser patterns.
  • Contiguous Sentence Extraction: We also extract the important sentences that follow the first extraction step to provide more context. A following sentence is extracted if it is tightly coupled with the preceding identified sentence, as signalled by conjunctions and other markers.
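A minimal sketch of the rule-based side of these extractors, assuming a simplified 5W-1H check and conjunction-based contiguity (the real accelerator also uses POS/dependency-parser patterns and a supervised classifier, which are omitted here):

```python
# Sketch: 5W-1H question detection plus contiguity extension via leading
# conjunctions. Sentences following an extracted sentence are kept when a
# conjunction signals tight coupling with the preceding sentence.
WH_STARTERS = {"what", "when", "where", "who", "why", "how"}
CONJUNCTIONS = {"and", "also", "because", "so", "additionally"}

def extract_with_context(sentences):
    extracted = []
    for i, s in enumerate(sentences):
        first = s.lower().split()[0] if s.split() else ""
        if first in WH_STARTERS and s.rstrip().endswith("?"):
            extracted.append(i)                 # 5W-1H question
        elif extracted and extracted[-1] == i - 1 and first in CONJUNCTIONS:
            extracted.append(i)                 # tightly coupled follow-up
    return [sentences[i] for i in extracted]

call = [
    "What was the revenue growth this year?",
    "And how does it compare to last year?",
    "Also include the regional split please.",
    "The weather has been great lately.",
]
print(extract_with_context(call))  # first three sentences survive
```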

Intent Segments Formation

Primarily, the intent segments are formed by combining the contiguous sentences extracted by the methods stated above. However, simply combining contiguous sentences can produce segments with so many sentences that the system's effectiveness decreases.

The system divides each obtained segment into subsets with the least deviation in the number of sentences per subset, with a maximum of six sentences in each. The splitting is done considering the continuity and similarity of the sentences. These split segments are treated as the final intent segments and are fed into the next module.
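A balanced split with a six-sentence cap can be sketched as follows (the tie-breaking on continuity and similarity of sentences is omitted here):

```python
import math

# Splitting a long intent segment into balanced subsets of at most six
# sentences, keeping the subset sizes as even as possible.
MAX_SENTENCES = 6

def split_segment(sentences):
    n = len(sentences)
    k = math.ceil(n / MAX_SENTENCES)          # number of subsets needed
    base, extra = divmod(n, k)                # sizes differ by at most one
    out, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        out.append(sentences[start:start + size])
        start += size
    return out

segment = [f"s{i}" for i in range(14)]        # 14 sentences
print([len(chunk) for chunk in split_segment(segment)])  # [5, 5, 4]
```

A hard cap alone would yield subsets of 6, 6 and 2 sentences here; balancing the sizes keeps the deviation between subsets minimal.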

Natural Language Intent Ranking

This module ranks the intent segments obtained from the previous module, using multiple signals.

  • Topics: Topics obtained from the Key Concept Extraction step described in Part 2 of this blog series are used to boost the scores of segments containing these concepts.
  • Number of questions: Segments containing a high number of questions receive a higher score.
  • Importance of paragraph: Segments belonging to bigger paragraphs, which tend to carry vital information, are given higher weightage.
  • Summarization: Segments containing TextRank + BERT summary sentences are boosted.

More signals can be added for domain-specific needs.
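As an illustration, the signals above can be combined in a simple weighted score; the weights and field names below are placeholders for the sketch, not the production values:

```python
# An illustrative linear scorer combining the four ranking signals.
WEIGHTS = {"topics": 2.0, "questions": 1.5, "paragraph": 1.0, "summary": 1.0}

def score_segment(segment):
    return (
        WEIGHTS["topics"] * segment["topic_matches"]
        + WEIGHTS["questions"] * segment["num_questions"]
        + WEIGHTS["paragraph"] * segment["paragraph_size"]
        + WEIGHTS["summary"] * segment["summary_hits"]
    )

segments = [
    {"id": "A", "topic_matches": 2, "num_questions": 1, "paragraph_size": 3, "summary_hits": 1},
    {"id": "B", "topic_matches": 0, "num_questions": 3, "paragraph_size": 1, "summary_hits": 0},
]
ranked = sorted(segments, key=score_segment, reverse=True)
print([s["id"] for s in ranked])  # segment A outranks B (9.5 vs 5.5)
```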

Dynamic number of Output Intents

Since there is no fixed number of intents that clients raise in a call, a hard cut-off of the top “N” intents will not produce the desired output. Hence, the system is designed to output a dynamic number of intents for each transcript, using a differential cut-off to decide how many intents should be returned.
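One way to realise such a differential cut-off, sketched here under the assumption that the cut is placed at the largest drop in the sorted score list (the product's exact rule is not specified):

```python
# Differential cut-off: instead of a fixed top-N, keep segments down to
# the largest gap in the sorted score list.
def dynamic_cutoff(scores):
    scores = sorted(scores, reverse=True)
    if len(scores) < 2:
        return len(scores)
    gaps = [scores[i] - scores[i + 1] for i in range(len(scores) - 1)]
    cut = gaps.index(max(gaps))               # position of the largest drop
    return cut + 1                            # keep everything above it

print(dynamic_cutoff([9.5, 9.1, 8.8, 4.0, 3.9]))  # 3 intents survive
```

The same score list with a fixed "top 2" rule would drop a strong intent (8.8), while a fixed "top 4" would admit a weak one (4.0); cutting at the gap adapts to each transcript.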

Detection of Data Drift in Time Series Forecasting

What is Data Drift?

Data Drift monitors changes in the data distribution and is one of the most common indicators used when monitoring models in MLOps. It is a metric that measures the change in distribution between two data sets. Before diving deeper into it, let us examine how ML Works defines drift for a time series use case and how the different drift components provide valuable insights and recommendations.

In Illustration 1 below, we can see that distributions of the light blue and dark blue samples (training and test data sets, respectively) are different for the same bin definitions of a feature in the model. This difference in the distribution is what drift quantifies as a percentage of shift.

Illustration 1: Distributions of the training (light blue bars) and the test data (dark blue bars).
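The computation Illustration 1 depicts can be sketched as a binned comparison of the two samples. The sketch below reports total variation distance between the histograms as the shift percentage; the exact metric used by ML Works may differ.

```python
# Drift as a percentage of distribution shift between two samples of the
# same feature, using shared bin edges (as in Illustration 1).
def drift_percentage(train, test, edges):
    def histogram(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                upper_ok = v < edges[i + 1] or (i == len(edges) - 2 and v == edges[-1])
                if edges[i] <= v and upper_ok:
                    counts[i] += 1
                    break
        return [c / len(values) for c in counts]  # normalise to proportions

    p, q = histogram(train), histogram(test)
    # Total variation distance, expressed as a percentage of shift.
    return 100 * 0.5 * sum(abs(a - b) for a, b in zip(p, q))

train = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
test = [3, 4, 4, 5, 5, 5, 6, 6, 7, 7]      # same feature, shifted upwards
edges = [0, 2, 4, 6, 8]
print(round(drift_percentage(train, test, edges), 1))  # 50.0
```

Identical distributions give 0% drift, while completely disjoint histograms give 100%.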

Data Drift in Time Series Models

Let’s consider a Promotion Effectiveness Model as an example with four variables:

  • Total Promotion Spends
  • Promotion Duration
  • Product’s Base Price
  • Product’s Promoted Price

These variables drive product sales every month, and data drift is measured at the three major aspects of a time series model, i.e.,

  • Feature Drift
  • Target Drift
  • Lag Drift

Feature Drift 

In Feature Drift, each variable in the training data is compared with the new stream of data that the model uses to make its predictions. Combining each feature's importance with its Feature Drift (e.g., Promotion Duration) gives an idea of the data problems you need to address as part of diagnosing model degradation.

Note: Feature-level insights are applicable to all types of machine learning models.

Target Drift 

Target Drift plays an important role in further understanding data issues. It measures how the predictions on the new data stream differ in distribution from the trained model’s target variable. Therefore, Target Drift indicates how extreme the model's predictions are compared to the training data.

Note: If Target Drift exists despite little or no Feature-level Drift, one can assert that the model is under-fitted and the relationship between the features (X) and the target (Y) is not robust enough for making predictions, or that the model is over-fitted to outliers, among other possibilities (the reasons are not limited to those listed above).

Therefore, it is recommended to investigate the model training process and to increase the quantity and quality of the data entering the model (improve correlation, feature transformations, better stratification, etc.).

Lag Drift

In time series models, auto-correlation is likely to affect the final prediction of the model. Hence, to identify a pattern change in the lag components, Lag Drift was introduced: a direct comparison of the model's lag components between the training and test data frames.

Note: If there is no Feature Drift or Target Drift, but there is Lag Drift, retraining the model with a better data sample is recommended for accurate sales prediction.
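As a simple illustration of the idea (our own sketch, comparing the lag-1 autocorrelation of the two windows rather than ML Works' full lag-component comparison):

```python
# Lag Drift sketch: compare the lag-1 autocorrelation of the training
# window with that of the new data window. A large difference signals a
# pattern change in the lag components even when the levels look similar.
def lag1_autocorr(series):
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t - 1] - mean) for t in range(1, n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

train_sales = [100, 110, 120, 130, 140, 150, 160, 170]  # smooth upward trend
new_sales = [100, 170, 100, 170, 100, 170, 100, 170]    # alternating pattern

print(lag1_autocorr(train_sales))  # strongly positive
print(lag1_autocorr(new_sales))    # strongly negative: the lag pattern drifted
```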

The metrics elucidated above can help you set up the capability to monitor the health and degradation of production models, and to determine the data-handling and modelling changes required to implement and sustain ML solutions and automation.

MLOps Principles

Illustration 2: Functional Flow of the First Step of Automating the ML Solution.

Based on our many years of consulting experience, we have built an enterprise-grade MLOps product called ML Works to address the problems mentioned above and enable ML solutions to take the first step in the MLOps journey.

With the rise of more and more MLOps platforms, the business world is moving towards an inevitable transformation. Today, big players like Google, Microsoft, and Amazon have begun to monetize this space.

As Anteelo’s next-gen industrialized MLOps product, ML Works can reduce your data scientists’ effort and lead your organization towards faster, more frugal innovation.

What is commonly overlooked in B2B dynamic pricing solutions?

Nowadays, corporate executives recognize that analytics is pivotal for pricing teams to create solutions that enable them to achieve their firm’s pricing objectives.

In the B2B domain, ‘dynamic pricing’ is a critical approach to bring substantial benefits to companies.

  1. It enables them to predict when to raise prices to capture upside, or to lower prices to avoid volume losses, which ultimately speeds up their decision-making process.
  2. It considers various variables vital to determining a product’s desired price, such as demand, deal size, customer type, geography, competitors’ product price, product type, and many more.

With the appropriate set of technologies, advanced analytics, agile processes, and problem-solving skills, one can build a powerful dynamic pricing engine. During the design and development phase, the vendor(s) or internal team works closely with the pricing department to understand their objectives and get inputs on pricing solutions. After completion, price recommendations are passed on to the sales representatives. And the way they follow the recommendations determines the solution’s success.

Now, suppose a higher price is recommended for some customers, but the root cause is not explicit. In such a case, the sales representatives may be reluctant to use the recommendation for fear of losing sales.

The effectiveness of dynamic pricing depends on sales representatives

Although pricing instructions are available to the sales reps, for them, the dynamic pricing solution is still a black box. Quite rightly, if they do not understand the rationale behind the price fluctuation for specific products/solutions, how will they negotiate with customers?

Many pricing teams overlook this aspect, which impacts the effectiveness of pricing solutions. However, there are multiple ways to get salespeople to accept dynamic pricing. Here’s how:

  1. The team responsible for building new dynamic-pricing processes and tools needs to incorporate the sales team’s knowledge into the system.
  2. Throughout the decision cycle, the sales representatives should be treated as partners, and the sales managers should be involved in the solution building process.
  3. Once the solution is ready, the pricing team and sales managers must explain the rationale behind the new price recommendations.
  4. This way, the salespersons can justify the new price.

All of this requires collaboration and extra time, but it is worth the extra effort.

Besides, the sales staff can feed win and loss information back into the system to steadily improve the model's accuracy and uncover new insights, making dynamic pricing self-reinforcing. This kind of involvement boosts their confidence in the solution and makes their experience count. Moreover, incentive structures also need to be realigned so that sales reps are rewarded for following the recommendations: agents are then compensated based on the results generated by the pricing tool's recommendations. Analytics can also help design this kind of incentive compensation.

A significant impact cannot come only from having a robust solution. The sales reps are equally crucial in enabling the last mile adoption of your dynamic pricing solution.

How to Evaluate a Web Hosting Provider’s Reliability

As any builder will tell you, if you build a house on poor foundations, it will fall. The same principle applies when building a website: here, however, the foundations are not reinforced concrete but the services of your hosting provider. A website is reliant on its hosting for its ongoing performance and success. Below, we'll look at the criteria for measuring the reliability of a web hosting provider.

1. The services you need

Hosting comes in a variety of forms and websites have their own specific needs. Finding the right host means matching these together. For example, if you run a WordPress website, you may want dedicated WordPress hosting which is designed specifically for optimising the performance and security of WordPress sites.

Look for a provider that offers all forms of hosting: shared, VPS, dedicated servers and cloud and which provides different solutions and packages for each form. You may need Linux or Windows hosting, business or reseller hosting, a choice between cPanel or Plesk control panels, the ability to choose the hardware spec of your dedicated server, want managed services or a bespoke enterprise-level cloud solution. A reliable host will be able to provide the solution which is tailored to your needs.

2. Availability

Availability, in web hosting parlance, refers to the amount of time that your website or application stays online. Also known as ‘uptime’, it is of critical importance because a website or app that goes offline loses you money and damages your reputation for reliability.

There are several reasons a website can go offline and not all of these are caused by poor web hosting. However, issues with hardware failure or overcrowded shared servers are within the remit of a web host and if these lead to constant lack of availability, your online venture can suffer as a consequence.

When it comes to availability, look for a web host that guarantees 99.9% uptime (the remaining 0.1% may be needed to update the operating system and patch vulnerabilities). At the same time, you can increase hardware reliability by opting for SSD storage, which uses solid-state rather than mechanical disk drives and is thus less prone to failure.
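To put the 99.9% figure in perspective, a quick calculation of the downtime such a guarantee still permits:

```python
# Maximum downtime allowed under an uptime guarantee, per year.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def max_downtime_hours(uptime_percent):
    return HOURS_PER_YEAR * (100 - uptime_percent) / 100

print(round(max_downtime_hours(99.9), 2))   # 8.76 hours per year
print(round(max_downtime_hours(99.99), 2))  # 0.88 hours (~53 minutes)
```

In other words, 99.9% still allows almost nine hours of downtime per year, so every extra "nine" in the guarantee matters.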

3. Technical support

While all web hosts will have a customer service department, what is more vital is finding a host that provides 24/7 technical support. There are many issues you can have with running a website and having 24/7 technical support in place means that, regardless of the time of day, an expert will be available to help sort the problem straight away.

Having a support team there for you in a time of need is one of the main reliability measures you should look for. For convenience, look for a support team that can be contacted by live chat, phone or ticket.

4. Money-back guarantee

People don’t always make the right choice with their hosting, nor are they always satisfied with the hosting services or support they receive. What makes this a bitter pill to swallow is that, should they change their mind, they find themselves out of pocket.

A reliable host is one which puts its customers first and which is so confident in the quality of its service that it offers a money-back guarantee. Here at Anteelo, for example, if you change your mind, you can claim a full refund on all hosting services (except dedicated servers and licensed add-ons) within the first 30 days. Additionally, our Anytime Money Back policy allows you to claim a refund for any unused portion of your contract even after the initial 30 days have passed.

5. Security

Cyberattacks have become a major threat to website owners with hacking, ransomware, malware, data theft and DDoS attacks leading to the demise of many businesses. One increasingly important measure of a web host’s reliability is how well they protect their customers from cybercrime.

Hosts should have security experts in place and provide them with the latest tools to do their job effectively. These should include the latest firewalls, intrusion detection and prevention tools and anti-spam email filters. They should also provide security services such as remote, encrypted backups, SSL certificates and email signing certificates.

6. Scalability    

Your hosting needs may change over time. If your website grows and gets lots more traffic, the hosting package you currently use might not offer sufficient storage, bandwidth, RAM and CPU resources to cope with your needs.

A measure of reliability is how quick and easy a host makes it to scale up to a bigger package or even to a different form of hosting solution (e.g. VPS, dedicated server or cloud). If this involves migrating to a different server, they’ll also help you with the process of migration so that the move is as seamless and undisruptive as possible. At the same time, scaling back to a smaller solution should be just as easy.

7. Price

Every business will consider price when it looks to acquire the services of a web host. However, it is important to compare like with like rather than going for the cheapest deal. While affordability is important, so too is price stability and cost transparency.

A reliable host is one which makes its pricing completely clear and where there are no hidden extras. Additionally, although prices do change from time to time, a good host will try to keep increases to a minimum and implement them as infrequently as possible.

8. What others think

While web hosts can provide details about their products and services on their website, a true measure of their reliability is what their customers say about them. Online reviews and star ratings are highly valuable resources when looking for the best hosting solution, as they give you the customers' judgement on the quality of a host's services.

Conclusion

With a reliable web host supporting your online venture, you have the right solution for your budget and hosting needs, the flexibility to grow, the security and uptime guarantees to keep you online, the support in place to take care of issues and the backing of other customers to help you make the right decision. And if you’re still not happy, you can rest assured that you’ll get your money back.

What’s New in the NIST Cybersecurity Framework 1.1

It’s been a long time coming. The U.S. Commerce Department’s National Institute of Standards and Technology (NIST) recently released version 1.1 of the Framework for Improving Critical Infrastructure Cybersecurity, affectionately known as the Cybersecurity Framework. The initial framework was created to help organizations that operate critical infrastructure better secure their digital assets. These industries include energy, banking, communications and the defense industrial base. However, organizations outside the critical infrastructure industries have also turned to the Cybersecurity Framework for guidance when it comes to securing their systems and data.

Version 1.1, the first update since February 2014, includes updates to authentication and identity, self-assessing cybersecurity risk, managing cybersecurity within the supply chain, and vulnerability disclosure.
The changes, according to NIST, are based on feedback collected through public calls for comments, questions received by team members, and workshops held in 2016 and 2017. Two drafts of Version 1.1 were circulated for public comment to help NIST comprehensively address all of these inputs.

“The release of the Cybersecurity Framework Version 1.1 is a significant advance that truly reflects the success of the public-private model for addressing cybersecurity challenges,” said Walter G. Copan, Under Secretary of Commerce for Standards and Technology and NIST Director. “From the very beginning, the Cybersecurity Framework has been a collaborative effort involving stakeholders from government, industry and academia. The impact of their work is evident in the widespread adoption of the framework by organizations across the United States, as well as internationally.”

Matt Barrett, program manager for the Cybersecurity Framework, said “this update refines, clarifies and enhances Version 1.0. It is still flexible to meet an individual organization’s business or mission needs, and applies to a wide range of technology environments, such as information technology, industrial control systems and the Internet of Things.”

The framework update process is now published on the Cybersecurity Framework website. Later this year NIST plans to release an updated companion document, the Roadmap for Improving Critical Infrastructure Cybersecurity, which will describe key areas of development, alignment and collaboration.

“Engagement and collaboration will continue to be essential to the framework’s success,” said Barrett. “The Cybersecurity Framework will need to evolve as threats, technologies and industries evolve. With this update, we’ve demonstrated that we have a good process in place for bringing stakeholders together to ensure the framework remains a great tool for managing cybersecurity risk.”

“Cybersecurity is critical for national and economic security,” said Secretary of Commerce Wilbur Ross. “The voluntary NIST Cybersecurity Framework should be every company’s first line of defense. Adopting version 1.1 is a must-do for all CEOs.”
