Accelerating Out of Crisis with Digital Transformation

Gouri Mahendru

Undoubtedly, business over the past few months has been unlike anything we have seen before. With the COVID-19 pandemic shutting physical stores, restaurants, and offices the world over, organizations of all shapes and sizes were compelled to move their businesses online, and we were reminded of the power of technology as an enabler of success.

As businesses look beyond the immediate impacts of COVID-19, it’s time to adopt a connected enterprise mindset and accelerate digital transformation.

The New Customer Experience

When lockdown hit, customer service teams on the frontline found themselves at the centre of a perfect storm. Dealing with both the instant switch to remote working and staffing shortages due to the pandemic, they had to manage a huge influx of phone and email enquiries from customers struggling to keep their cool as they tried to rearrange cancelled bookings and secure refunds.

From market research, we know there is still a significant disconnect between the service businesses believe they are delivering, and the service customers believe they are getting. While the situation remains precarious, now is the time to focus on delivering fast, transparent, and verified support. Putting the right technology at the heart of this transformation will be key to success.

Reimagining CX in the wake of the pandemic

Many businesses shifted their operations online to continue selling safely through the pandemic, with worldwide spending on digital transformation technologies and services expected to rise by over 10% in 2020.

This has opened up a vast array of new communication platforms on which organizations can engage with their customers. But businesses must strengthen the bridge between their different channels to ensure a consistent customer experience. Omnichannel strategies have become a necessity, as companies find new ways to interact with their customer base on the channels they are using the most. Taking support to customers, rather than bringing customers to support, is critical.

Here are some useful tips for reinventing your customer experience with a supercharged omnichannel approach.

1.     No two customers are the same

People like to be treated as individuals and want to raise issues in the environment they’re most comfortable in. It’s no good for businesses to invest heavily in one channel at the expense of another, as they could end up isolating a big customer segment.

Being able to support customers through email, phone, and chat services in a single, streamlined solution can help businesses deliver a better overall experience. The last thing customers want to do is repeat themselves when they switch between a chatbot interaction, text, email, or phone exchange. Offering a seamless experience means a customer’s query is logged once and shared across all communication channels, reducing the likelihood of them becoming dissatisfied with the service they are receiving.

2.     Look inward, as well as outward

It’s not just your customer-facing technology that you should consider, you also need to think about the internal systems that can help improve your target market’s perception of the company. Taking an omnichannel approach to customer communication provides multiple platforms to collect customer data. With more data, you can build a better picture of the average customer journey – from awareness and consideration to purchase – and deliver a better experience for each of them.

By offering your customers multiple touchpoints to interact with your brand, they can get everything they need from a single source of truth, without having to switch between the channels.

3.     Tweak and optimize campaigns as necessary

To succeed in hitting the right tone, keeping existing customers, and attracting new ones, you should understand exactly which marketing campaigns are resonating, and which aren’t. The results right now are likely to be very different from ‘business as usual’ – so the approach taken needs to be tailored to each customer accordingly.

Surveys of sales leaders during COVID-19 found that 62% have directed their teams to spend more time in their CRM system, looking at what insights they can glean from it. The CRM system is a powerful tool for collecting data and learning more about each customer, with the goal of delivering a better experience and building trust between buyer and seller.

Whatever systems you deploy, it’s important to be mindful of how your customers want to interact with you, not the other way around. As customers look to support the businesses that are looking after them the most, offering a consistent experience across your channels is key to securing loyal customers and repeat business.

Smarter CX starts with AI

There is a growing AI revolution taking place in customer service centers. Our own research found that a quarter of businesses want to use AI to improve their customers’ experience of their brand. This is hugely encouraging for the industry, but organizations shouldn’t invest in AI just for the sake of it. They need to find areas in which its use will see the most value.

For example, over a quarter (27%) said that their biggest frustration when dealing with customer service agents was being left on hold for too long. This issue has been exacerbated further by the huge volume of enquiries customer support teams now find themselves facing, with some customers waiting hours before getting through. AI-powered chatbots can remove some of this backlog by automating simple questions and routing customer chats that require urgent attention through to human service agents.

We know that consumers prize human interaction, especially during a time when it is so limited. For this reason, AI should only be brought in to augment, not replace, human customer service agents. In doing so, businesses can develop AIs that mimic the behaviour of their best agents, while freeing up their time to focus on trickier cases. This will ultimately lead to more positive outcomes, better all-round customer experiences, greater brand loyalty, and increased long-term value.

About the Author –

Gouri is part of the Quality Management function at GAVS, handling the Operations and Delivery excellence within ZIF Command Centres. She is passionate about driving business excellence through innovative IT Service Management in the Digital era and always looks for ways to deliver business value.
When she’s not playing with data and pivoting tables, she spends her time cooking, watching dramas and thrillers, and exploring places in and around the city.

Patient Segmentation Using Data Mining Techniques

Srinivasan Sundararajan

Patient Segmentation & Quality Patient Care

As the need for quality and cost-effective patient care increases, healthcare providers are increasingly focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. Simply put, data-driven healthcare augments the human intelligence built on experience and knowledge.

Segmentation is a standard technique used in Retail, Banking, Manufacturing, and other industries that need to understand their customers in order to provide better customer service. Customer segmentation defines the behavioral and descriptive profiles of customers. These profiles are then used to provide personalized marketing programs and strategies for each group.

In a way, patients are like customers to healthcare providers. Though quality of care takes precedence over any profit-making intention, a similar segmentation of patients will immensely benefit healthcare providers, mainly for the following reasons:

  • Customizing the patient care based on their behavior profiles
  • Enabling a stronger patient engagement
  • Providing the backbone for data-driven decisions on patient profile
  • Performing advanced medical research like launching a new vaccine or trial

The benefits are obvious, and individual hospitals may add more points to the list above; the rest of this article describes how to perform patient segmentation using data mining techniques.

Data Mining for Patient Segmentation

In data mining, a segmentation or clustering algorithm iterates over the cases in a dataset to group them into clusters that share similar characteristics. These groupings are useful for exploring data, identifying anomalies in the data, and creating predictions. Clustering is an unsupervised data mining (machine learning) technique used for grouping data elements without advance knowledge of the group definitions.

K-means clustering is a well-known method of assigning cluster membership by minimizing the differences among items in a cluster while maximizing the distance between clusters. The clustering algorithm first identifies relationships in a dataset and generates a series of clusters based on those relationships. A scatter plot is a useful way to visually represent how the algorithm groups data: every case in the dataset is a point on the graph, and the cluster groupings illustrate the relationships that the algorithm identifies.


One of the important parameters for a K-Means algorithm is the number of clusters or the cluster count. We need to set this to a value that is meaningful to the business problem that needs to be solved. However, there is good support in the algorithm to find the optimal number of clusters for a given data set, as explained next.

To determine the number of clusters for the algorithm to use, we can plot the within-cluster sum of squares (WCSS) against the number of clusters extracted. The appropriate number of clusters to use is at the bend, or ‘elbow’, of the plot. The Elbow Method is one of the most popular ways to determine this optimal value of k, i.e. the number of clusters. A sketch of code that creates such a curve is shown below.

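This sketch assumes the Apache Commons Math library; the data points are illustrative stand-ins for the real patient dataset:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.commons.math3.ml.clustering.CentroidCluster;
    import org.apache.commons.math3.ml.clustering.DoublePoint;
    import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

    public class ElbowMethod {

        // Within-cluster sum of squares (WCSS) for one clustering result.
        static double wcss(List<CentroidCluster<DoublePoint>> clusters) {
            double total = 0.0;
            for (CentroidCluster<DoublePoint> cluster : clusters) {
                double[] center = cluster.getCenter().getPoint();
                for (DoublePoint p : cluster.getPoints()) {
                    double[] x = p.getPoint();
                    for (int i = 0; i < x.length; i++) {
                        double d = x[i] - center[i];
                        total += d * d;
                    }
                }
            }
            return total;
        }

        public static void main(String[] args) {
            // Illustrative two-dimensional cases (e.g. HbA1c and FBG values).
            double[][] raw = {
                {5.4, 92}, {5.6, 95}, {6.0, 101}, {6.9, 126},
                {7.2, 133}, {8.5, 180}, {8.9, 195}, {9.4, 210}
            };
            List<DoublePoint> data = new ArrayList<>();
            for (double[] row : raw) data.add(new DoublePoint(row));

            // Print WCSS for k = 1..6; plotting these values against k
            // reveals the 'elbow' where more clusters stop paying off.
            for (int k = 1; k <= 6; k++) {
                KMeansPlusPlusClusterer<DoublePoint> clusterer =
                        new KMeansPlusPlusClusterer<>(k, 100);
                System.out.println("k=" + k + " WCSS=" + wcss(clusterer.cluster(data)));
            }
        }
    }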

In this example, based on the resulting plot, k = 4 looks like a good value to try.

Reference Patient Segmentation Using K-Means Algorithm in GAVS Rhodium Platform

In the GAVS Rhodium Platform, which helps healthcare providers with Patient Data Management and Patient Data Sharing, there is a reference implementation of patient segmentation using the K-Means algorithm. The following attributes are used, based on publicly available patient admission data (no personal information is used in this dataset). Note that the reference implementation uses sample attributes; in a real scenario, consulting with healthcare practitioners will help identify the correct attributes to use for clustering.

To prepare the data for clustering, patients are described along the following dimensions:

  • HbA1c: Measuring the glycated form of hemoglobin to obtain the three-month average of blood sugar.
  • Triglycerides: Triglycerides are the main constituents of natural fats and oils. This test indicates the amount of fat or lipid found in the blood.
  • FBG: Fasting Plasma Glucose test measures the amount of glucose levels present in the blood.
  • Systolic: Blood Pressure is the pressure of circulating blood against the walls of Blood Vessels. This test relates to the phase of the heartbeat when the heart muscle contracts and pumps blood from the chambers into the arteries.
  • Diastolic: The diastolic reading is the pressure in the arteries when the heart rests between beats.
  • Insulin: Insulin is a hormone that helps move blood sugar, known as glucose, from your bloodstream into your cells. This test measures the amount of insulin in your blood.
  • HDL-C: Cholesterol is a fat-like substance that the body uses as a building block to produce hormones. HDL-C or good cholesterol consists primarily of protein with a small amount of cholesterol. It is considered to be beneficial because it removes excess cholesterol from tissues and carries it to the liver for disposal. The test for HDL cholesterol measures the amount of HDL-C in blood.
  • LDL-C: LDL-C or bad cholesterol present in the blood as low-density lipoprotein, a relatively high proportion of which is associated with a higher risk of coronary heart disease. This test measures the LDL-C present in the blood.
  • Weight: This test measures the patient’s body weight.

The above tests are taken for the patients during the admission process.

The following is a sketch of the kind of code that creates the patient clustering.

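This sketch again assumes the Apache Commons Math clusterer; the values for the nine admission-test attributes are illustrative, and the actual Rhodium implementation may differ:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.commons.math3.ml.clustering.CentroidCluster;
    import org.apache.commons.math3.ml.clustering.DoublePoint;
    import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

    public class PatientSegmentation {
        public static void main(String[] args) {
            // Each row: HbA1c, triglycerides, FBG, systolic, diastolic,
            // insulin, HDL-C, LDL-C, weight (illustrative values only).
            double[][] admissions = {
                {5.5, 140, 92, 118, 78, 9, 58, 100, 68},
                {5.8, 155, 98, 122, 80, 11, 52, 110, 74},
                {7.4, 210, 150, 140, 90, 20, 40, 150, 88},
                {7.9, 230, 165, 145, 94, 24, 38, 160, 92},
                {9.1, 300, 205, 160, 100, 30, 32, 180, 101},
                {9.5, 320, 220, 165, 104, 34, 30, 190, 105}
            };
            List<DoublePoint> patients = new ArrayList<>();
            for (double[] row : admissions) patients.add(new DoublePoint(row));

            // k = 4 as suggested by the elbow analysis above. In practice,
            // the attributes should be standardized first, since they use
            // very different units and scales.
            KMeansPlusPlusClusterer<DoublePoint> clusterer =
                    new KMeansPlusPlusClusterer<>(4, 100);
            List<CentroidCluster<DoublePoint>> clusters = clusterer.cluster(patients);

            for (int i = 0; i < clusters.size(); i++) {
                System.out.printf("Cluster %d: %d patient(s)%n",
                        i, clusters.get(i).getPoints().size());
            }
        }
    }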

From the resulting clusters, healthcare providers can infer patient behavior and patterns based on attributes such as creatinine and glucose levels; in real-life situations, other attributes can be used.

AI will play a major role in future healthcare data management and decision-making, and data mining algorithms like K-Means provide a way to segment patients based on such attributes, which will improve the quality of patient care.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at patient data sharing within hospitals as well as across hospitals (Healthcare Interoperability), while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero knowledge proofs.

Customer Focus Realignment in a Pandemic Economy

Ashish Joseph

Business Environment Overview

The Pandemic Economy has created an environment that has forced businesses to either adapt or perish. The atmosphere has become a quest for survival of the fittest. On the brighter side, organizations have stepped up and adapted to the crisis in a way that they have worked faster and better than ever before.

During this crisis, companies have been strategic in understanding their focus areas and where to concentrate the most. From a high-level perspective, we can see that businesses have focused on recovering the sources of their revenues, rebuilding operations, restructuring the organization, and accelerating their digital transformation initiatives. In a way, the pandemic has forced companies to optimize their strategies and harness their core competencies in a hyper-competitive, survival-driven environment.

Need for Customer Focused Strategies

A pivotal and integral strategy for businesses to maintain and sustain growth is to avoid the churn of their existing customers and ensure the quality of delivery builds the trust needed for future collaborations and referrals. Many organizations, including GAVS, have understood that Customer Experience and Customer Success are consequential for customer retention and brand affinity.

Businesses should realign the way they look at sales funnels. A large portion of the annual budget is usually allocated towards top-of-the-funnel activities to acquire more customers. But companies with customer success engraved in their souls believe in the ideology that the bottom of the funnel feeds the top of the funnel. This strategy results in a self-sustaining and recurring revenue model for the business.

An independent survey conducted by the Customer Service Managers and Professionals Journal found that companies pay six times more to acquire new customers than to keep an existing one. In this pandemic economy, the costs of customer acquisition will be much higher than before, as organizations must be very judicious in their spending. The best step forward is to make sure companies strive for excellence in their customer experience and deliver measurable value to them. A study conducted by Bain and Company titled “Prescription for Cutting Costs” describes how increasing customer retention by 5% increases profits by 25% to 95%.

The path to a sustainable, high-growth business is to adopt customer-centric strategies that yield more value and growth for its customers. Enhancing customer experience should be a priority, and proper governance must be in place to monitor and gauge these strategies. Governance in the world of customer experience must revolve around identifying and managing the resources needed to drive sustained actions, establishing robust procedures to organize processes, and ensuring a framework for stellar delivery.

Scaling to ever-changing customer needs

Walker Information, a research firm, conducted independent research on B2B companies, focusing on the key initiatives that drive customer experiences and future growth. The study included various customer experience leaders, senior executives, and influencers representing a diverse set of business models in the industry. They published the report titled “Customer 2020: A Progress Report”, and the following are the strategies that best meet the changing needs of customers in the B2B landscape.


Over 45% of the leaders highlighted the importance of developing a customer-centric culture that simplifies products and processes for the business. Now the question that we need to ask ourselves is, how do we as an organization scale up to these demands of the market? I strongly believe that each of us, in the different roles we play in the organization, has an impact.

The Executive Team can support more customer experience strategies, formulate success metrics, measure the impact of customer success initiatives, and ensure alignment with respect to the corporate strategy.

The Client Partners can ensure that they represent the voice of the customer, plot a feasible customer experience roadmap, be on point with customer intelligence data, and ensure transparency and communication with the teams and the customers. 

The cross-functional team managers and members can own and execute process improvements, personalize and customize customer journeys, and monitor key delivery metrics.

When all these members work in unison, the target goal of delivery excellence coupled with customer success is always achievable.

Going Above and Beyond

Organizations should aim for customers who can be retained for life. Retention depends upon how far a business is willing to go the extra mile to add measurable value to its customers. Business contracts should evolve into partnerships that combine competitive advantages to bring solutions to real-world business problems.

As customer success champions, we should reevaluate the possibilities in which we can make a difference for our customers. By focusing on our core competencies and using the latest tools in the market, we can look for avenues that can bring effort savings, productivity enhancements, process improvements, workflow optimizations, and business transformations that change the way our customers do business. 

After all, We are GAVS. We aim to galvanize a sense of measurable success through our committed teams and innovative solutions. We should always stride towards delivery excellence and strive for customer success in everything we do.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Reduce Test Times and Increase Coverage with AI & ML

Kevin Surace

Chairman & CTO, Appvance.ai

With the need for frequent builds—often many times in a day—QEs can only keep pace through AI-led testing. It is the modern approach that allows quality engineers to create scripts and run tests autonomously to find bugs and provide diagnostic data to get to the root cause.

AI-driven testing means different things to different QA engineers. Some see it as using AI for identifying objects or helping create script-less testing; some consider it as autonomous generation of scripts while others would think in terms of leveraging system data to create scripts which mimic real user activity.

Our research shows that teams who are able to implement what they can in scripts and manual testing have, on average, less than 15% code, page, action, and likely user flow coverage. In essence, even if you have 100% code coverage, you are likely testing less than 15% of what users will do. That in itself is a serious issue.

Starting in 2012, Appvance set out to rethink the concept of QA automation. Today our AIQ Technology combines tens of thousands of hours of test automation machine learning with the deep domain knowledge (the essential business rules) that each QE specialist knows about their application. We create an autonomous expert system that spawns multiple instances of itself that swarm over the application, testing at the UX and API levels. Along the way, these intelligences write the scripts, hundreds and thousands of them, that describe their individual journeys through the application.

And why would we need to generate so many tests fully autonomously? Because applications today are 10X the size they were just ten years ago, but your QE team doesn’t have 10X the number of test automation engineers, and you have a tenth of the time to do the work. Just to keep pace with the dev team, each quality engineer needs to be 100X more productive than they were 10 years ago.

Something had to change; that something is AI.

AI-testing in two steps

We leveraged AI and witnessed over 90% reduction in human effort to find the same bugs. So how does this work?

It’s really a two-stage process.

First, leveraging key AI capabilities in TestDesigner, Appvance’s codeless test creation system, we make it possible to write scripts faster, identify more resilient accessors, and substantially reduce maintenance of scripts.

With AI alongside you as you implement an automated test case, you get a technology that suggests the most stable accessors and constantly improves and refines them. It also creates “fallback accessors”: when a test runs and hits an accessor change, the script can continue even though changes have been made to the application. And finally, the AI can self-heal scripts, updating them with new accessors without human assistance. These AI-based, built-in technologies give you the most stable scripts every time, with the most robust accessor methodologies and self-healing. Nothing else comes close.

The second stage is autonomous generation of tests. To beat the test backlog and crush it, you need a heavy lift in finding bugs and, as we have learnt, to go far beyond the use cases that a business analyst listed. Job one is to find bugs and prioritize them, leveraging AI to generate tests autonomously.

Appvance’s patented AI engine has already been trained with millions of actions. You then teach it the business rules of your application (machine learning). It will create real user flows, take every possible action, discover every page, fill out every form, get to every state, and validate the most critical outcomes just as you trained it to do. It does all this without writing or recording a single script. We call this ‘blueprinting’ an application, and we do it at every new build. Multiple instances of the AI spin up, each selecting a unique path through the application, typically finding thousands of flows in a matter of minutes. When complete, the AI hands you the results, including bugs, all the diagnostic data to help find the root cause, and the reusable test scripts to reproduce each bug. A further turn of the crank can refine these scripts into exact replicas of what production users are doing and apply them to the new build.

Any modern approach to continuous testing needs to leverage AI both in helping QA engineers create scripts and in autonomously creating tests, so that the two work together to find bugs and provide the data to get to the root cause. That AI-driven future is available today from Appvance.

About the Author –

Kevin Surace is a highly lauded entrepreneur and innovator. He’s been awarded 93 worldwide patents, and was Inc. Magazine Entrepreneur of the Year, CNBC Innovator of the Decade, a Davos World Economic Forum Tech Pioneer, and inducted into the RIT Innovation Hall of Fame. Kevin has held leadership roles with Serious Energy, Perfect Commerce, CommerceNet and General Magic and is credited with pioneering work on AI virtual assistants, smartphones, QuietRock and the Empire State Building windows energy retrofit.

Artificial Intelligence in Healthcare

Dr. Ramjan Shaik

Scientific progress is about many small advancements and occasional big leaps. Medicine is no exception. In a time of rapid healthcare transformation, health organizations must quickly adapt to evolving technologies, regulations, and consumer demands. Since the inception of electronic health record (EHR) systems, volumes of patient data have been collected, creating an atmosphere suitable for translating data into actionable intelligence. The growing field of artificial intelligence (AI) has created new technology that can handle large data sets, solving complex problems that previously required human intelligence. AI integrates these data sources to develop new insights on individual health and public health.

Highly valuable information can sometimes get lost amongst trillions of data points, costing the industry around $100 billion a year. Providers must ensure that patient privacy is protected, and consider ways to find a balance between costs and potential benefits. The continued emphasis on cost, quality, and care outcomes will perpetuate the advancement of AI technology to realize additional adoption and value across healthcare.

Although most organizations utilize structured data for analysis, valuable patient information is often “trapped” in an unstructured format. This type of data includes physician and patient notes, e-mails, and audio voice dictations. Unstructured data is frequently richer and more multifaceted. It may be more difficult to navigate, but unstructured data can lead to a plethora of new insights. Using AI to convert unstructured data to structured data enables healthcare providers to leverage automation and technology to enhance processes, reduce the staff required to monitor patients while filling gaps in healthcare labor shortages, lower operational costs, improve patient care, and monitor the AI system for challenges.

AI is playing a significant role in medical imaging and clinical practice. Providers and healthcare organizations have recognized the importance of AI and are tapping into intelligence tools. The AI health market is expected to reach $6.6 billion by 2021 and to exceed $10 billion by 2024. AI offers the industry incredible potential to learn from past encounters and make better decisions in the future. Algorithms could standardize tests, prescriptions, and even procedures across the healthcare system, being kept up-to-date with the latest guidelines in the same way a phone’s operating system updates itself from time to time.

There are three main areas where AI efforts are being invested in the healthcare sector.

  • Engagement – This involves improving how patients interact with healthcare providers and systems.
  • Digitization – AI and other digital tools are expected to make operations more seamless and cost-effective.
  • Diagnostics – By using products and services built on AI algorithms, diagnosis and patient care can be improved.

AI will be most beneficial in three other areas, namely physicians’ clinical judgment and diagnosis, AI-assisted robotic surgery, and virtual nursing assistants.

Following are some of the scenarios where AI makes a significant impact in healthcare:

  • AI can be utilized to provide personalized and interactive healthcare, including anytime face-to-face appointments with doctors. AI-powered chatbots can review a patient’s symptoms and recommend whether a virtual consultation or a face-to-face visit with a healthcare professional is necessary.
  • AI can enhance the efficiency of hospitals and clinics in managing patient data, clinical history, and payment information by using predictive analytics. Hospitals are using AI to gather information on trillions of administrative and health record data points to streamline the patient experience. This collaboration of AI and data helps hospitals/clinics to personalize healthcare plans on an individual basis.
  • A taskforce augmented with artificial intelligence can quickly prioritize hospital activity for the benefit of all patients. Such projects can improve hospital admission and discharge procedures, bringing about enhanced patient experience.
  • Companies can use algorithms to scrutinize huge clinical and molecular data to personalize healthcare treatments by developing AI tools that collect and analyze data from genetic sequencing to image recognition empowering physicians in improved patient care. AI-powered image analysis helps in connecting data points that support cancer discovery and treatment.
  • Big data and artificial intelligence can be used in combination to predict clinical, financial, and operational risks by taking data from all the existing sources. AI analyzes data throughout a healthcare system to mine, automate, and predict processes. It can be used to predict ICU transfers, improve clinical workflows, and even pinpoint a patient’s risk of hospital-acquired infections. Using artificial intelligence to mine health data, hospitals can predict and detect sepsis, which ultimately reduces death rates.
  • AI helps healthcare professionals harness their data to optimize hospital efficiency, better engage with patients, and improve treatment. AI can notify doctors when a patient’s health deteriorates and can even help in the diagnosis of ailments by combing its massive dataset for comparable symptoms. By collecting symptoms of a patient and inputting them into the AI platform, doctors can diagnose quickly and more effectively.   
  • Robot-assisted surgeries, ranging from minimally-invasive procedures to open-heart surgeries, enable doctors to perform procedures with precision, flexibility, and control that go beyond human capabilities, leading to fewer surgery-related complications, less pain, and a quicker recovery time. Robots can be developed to improve endoscopies by employing the latest AI techniques, which help doctors get a clearer view of a patient’s illness from both a physical and data perspective.

Having understood the advancements of AI in various facets of healthcare, it is to be realized that AI is not yet ready to fully interpret a patient’s nuanced response to a question, nor is it ready to replace examining patients – but it is efficient in making differential diagnoses from clinical results. It is to be understood very clearly that the role of AI in healthcare is to supplement and enhance human judgment, not to replace physicians and staff.

We at GAVS Technologies are fully equipped with cutting edge AI technology, skills, facilities, and manpower to make a difference in healthcare.

We are working on a number of ongoing and in-pipeline AI projects in healthcare.


Following are the projects that are being planned:

  • Controlling Alcohol Abuse
  • Management of Opioid Addiction
  • Pharmacy Support – drug monitoring and interactions
  • Reducing medication errors in hospitals
  • Patient Risk Scorecard
  • Patient Wellness – Chronic Disease management and monitoring

In conclusion, it is evident that the advent of AI in the healthcare domain has had a tremendous impact on patient treatment and care. For more information on how our AI-led solutions and services can help your healthcare enterprise, please reach out to us here.

About the Author –

Dr. Ramjan is a Data Analyst at GAVS. He has a Doctorate degree in the field of Pharmacy. He is passionate about drawing insights out of raw data and considers himself to be a ‘Data Person’.

He loves what he does and tries to make the most of his work. He is always learning something new from programming, data analytics, data visualization to ML, AI, and more.

Center of Excellence – Big Data

The Big Data CoE is a team of experts that experiments with and builds cutting-edge solutions by leveraging the latest technologies, like Hadoop, Spark, TensorFlow, and emerging open-source technologies, to deliver robust business results. A CoE is where organizations identify new technologies, learn new skills, and develop appropriate processes that are then deployed into the business to accelerate adoption.

Leveraging data to drive competitive advantage has shifted from being an option to a requirement in a hyper-competitive business landscape. One of the main objectives of the CoE is deciding on the right strategy for the organization to become data-driven and benefit from a world of Big Data, Analytics, Machine Learning, and the Internet of Things (IoT).


“According to the Chaos Report, 52% of projects are either delivered late or run over the allocated budget. The average across all companies is 189% of the original cost estimate. The average cost overrun is 178% for large companies, 182% for medium companies, and 214% for small companies. The average overrun is 222% of the original time estimate. For large companies, the average is 230%; for medium companies, the average is 202%; and for small companies, the average is 239%.”

The Big Data CoE plays a vital role in bringing down costs and response times, and in ensuring projects are delivered on time, by helping the organization build skilled resources.

Big Data’s Role

The CoE helps the organization build quality big data applications on its own by maximizing its ability to leverage data. Its data engineers are committed to helping you:

  • define your strategic data assets and data audience
  • gather the required data and put in place new collection methods
  • get the most from predictive analytics and machine learning
  • have the right technology, data infrastructure, and key data competencies
  • ensure you have an effective security and governance system in place to avoid huge financial, legal, and reputational problems.

Data Analytics Stages

The CoE provides architecture-optimized building blocks covering all data analytics stages: data acquisition from a data source, preprocessing, transformation, data mining, modeling, validation, and decision making.


Focus areas

Algorithms support the following computation modes:

  • Batch processing
  • Online processing
  • Distributed processing
  • Stream processing

The Big Data analytics lifecycle can be divided into the following nine stages:

  • Business Case Evaluation
  • Data Identification
  • Data Acquisition & Filtering
  • Data Extraction
  • Data Validation & Cleansing
  • Data Aggregation & Representation
  • Data Analysis
  • Data Visualization
  • Utilization of Analysis Results

A key focus of the Big Data CoE is to establish a data-driven organization by developing proofs of concept with the latest Big Data and Machine Learning technologies. As part of CoE initiatives, we are developing AI widgets for various marketplaces, such as Azure, AWS, Magento, and others. We are also actively engaging and motivating the team to learn cutting-edge technologies and tools like Apache Spark and Scala. We encourage the team to approach each problem in a pragmatic way by making them understand the latest architectural patterns over the traditional MVC methods.

It has been established that business-critical decisions supported by data-driven insights have been more successful. We aim to take our organization forward by unleashing the true potential of data!

If you have any questions about the CoE, you may reach out to them at SME_BIGDATA@gavstech.com

CoE Team Members

  • Abdul Fayaz
  • Adithyan CR
  • Aditya Narayan Patra
  • Ajay Viswanath V
  • Balakrishnan M
  • Bargunan Somasundaram
  • Bavya V
  • Bipin V
  • Champa N
  • Dharmeswaran P
  • Diamond Das
  • Inthazamuddin K
  • Kadhambari Manoharan
  • Kalpana Ashokan
  • Karthikeyan K
  • Mahaboobhee Mohamedfarook
  • Manju Vellaichamy
  • Manojkumar Rajendran
  • Masthan Rao Yenikapati
  • Nagarajan A
  • Neelagandan K
  • Nithil Raj Tharammal Paramb
  • Radhika M
  • Ramesh Jayachandar
  • Ramesh Natarajan
  • Ruban Salamon
  • Senthil Amarnath
  • T Mohammed Anas Aadil
  • Thulasi Ram G
  • Vijay Anand Shanmughadass
  • Vimalraj Subash

Center of Excellence – .Net


“Maximizing the quality, efficiency, and reusability by providing innovative technical solutions, creating intellectual capital, inculcating best practices and processes to instill greater trust and provide incremental value to the Stakeholders.”

With the above mission, we have embarked on our journey to establish and strengthen the .NET Center of Excellence (CoE).

“The only way to do great work is to love what you do.” – Steve Jobs

Expertise in this CoE is drawn from top talent across all customer engagements within GAVS. Team engagement is maintained at a very high level through various connects, such as regular technology sessions, advanced training for CoE members from MS, and support and guidance for becoming an MS MVP. Members also socialize new trending articles, tools, whitepapers, and blogs within the CoE team and the MS Teams channels set up for collaboration. All communications from MS Premier Communications sent to Gold Partners are also shared within the group. The high-level roadmap planned for this group is described in the initiatives below.


The .NET CoE is focused on assisting our customers in every stage of the engagement, right from on-boarding, planning, execution, and technical implementation, all the way to launching and growing. Our prescriptive approach is to leverage industry-proven best practices, solutions, and reusable components, provide robust resources and training, and build a vibrant partner community.

With the above as the primary goal in mind, the CoE group is currently engaged in or planning the following initiatives.

Technology Maturity Assessment

One of the main objectives of this group is to provide constant feedback to all .NET-stack projects for continuous improvement. The goal of this initiative is to build a technology maturity index for all projects across a defined set of parameters.


Using this approach, within a short span of time we were able to make a significant impact on some of our engagements.

Client – Online Chain Store: Identified cheaper cloud hosting option for application UI.

Benefits: Huge cost and time savings.

Client – Health care sector: Provided alternate solution for DB migrations from DEV to various environments.

Benefits: Huge annual savings on licensing costs.

Competency Building

“Anyone who stops learning is old, whether at twenty or eighty.” – Henry Ford

Continuous learning and upskilling are the new norms in today’s fast-changing technology landscape. This initiative is focused on providing learning and upskilling support to all technology teams in GAVS. Identifying code mentors and supporting team members to become full-stack developers are some of the activities planned under this initiative. Working along with the Learning & Development team, the .NET CoE is formulating different training tracks to upskill team members and provide support for external assessments and MS certifications.

Solution Accelerators

“Good, better, best. Never let it rest. ‘Till your good is better and your better is best.” – St. Jerome

The primary determinants of CoE effectiveness are involvement in solutions and accelerators, and maintenance of standard practices for the relevant technologies across customer engagements throughout the organization.

As part of this initiative, we are focusing on building project templates, DevOps pipelines, and automated testing templates for different technology stacks, for both serverless and server-hosted scenarios. We are also planning similar activities for the desktop/mobile stack with the Multi-Platform App UI (MAUI) framework, which is planned to be released for preview in Q4 2020.


Additionally, we are adopting low-code/no-code development platforms for accelerated development cycles in specific use cases.

As we progress on our journey to strengthen the .NET CoE, we want to act as a catalyst in the rapid and early adoption of new technology solutions and work as a trusted partner with all our customers and stakeholders.

If you have any questions about the CoE, you may reach out to them at COE_DOTNET@gavstech.com

CoE Team Members

  • Bismillakhan Mohammed
  • Gokul Bose
  • Kirubakaran Girijanandan
  • Neeraj Kumar
  • Prasad D
  • Ramakrishnan S
  • Saphal Malol
  • Saravanan Swaminathan
  • Senthilkumar Kamayaswami
  • Sethuraman Varadhan
  • Srinivasan Radhakrishnan
  • Thaufeeq Ahmed
  • Thomas T
  • Vijay Mahalingam

Center of Excellence – Database


During World War II, there was a time when the Germans were winning on every front and the fear of Hitler taking over the world loomed large. At that point in time, had the Allies not taken drastic measures and invested in ground-breaking technologies such as radar, aircraft, and atomic energy, the world would have been starkly different from what it is today.

Even in today’s world, the pace at which things are changing is incredible. The evolution of technology is unstoppable, and companies must be ready. There is an inherent need for them to differentiate themselves by providing solutions that showcase a deep understanding of domain and technology to address evolving customer expectations. What becomes extremely important for companies is to establish themselves as incubators of innovation and possess the ability to constantly innovate and fail fast. Centers of Excellence can be an effective solution to address these challenges.

“An organisation’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.” – Jack Welch, former Chairman and CEO of General Electric

The Database CoE was formed with a mission to groom, enhance and incubate talents within GAVS to stay abreast of the evolving technology landscape and help our customers with cutting edge technology solutions.

We identify the experts and the requirements across all customer engagements within GAVS. Regular connects and technology sessions ensure everyone in the CoE is learning at least one new topic a week. Our charter and roadmap are organized, by priority, around the initiatives described below.


The Database CoE is focused on assisting our customers in every stage of the engagement, right from on-boarding through planning and execution, with a consultative approach and a futuristic mindset. With these primary goals, we are currently working on the following initiatives:

Competency Building

When we help each other and stand together, we evolve to be the strongest.

Continuous learning is an imperative in the current times. Our fast-paced training on project teams is an alternative to primitive classroom sessions. We believe true learning happens when you are working hands-on. With this key aspect in mind, we divide the teams into smaller groups and map them to projects for greater exposure and experience.

This started off with a pilot with an ISP, where we trained 4 CoE members in Azure and Power BI within a span of 2 months.


Database Maturity Assessment

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly.” – George Westerman, research scientist at the MIT Center for Digital Business

Why Bother with a Database Assessment?

We often know we have a problem and can visualize the ideal state we want our technology solution to get us to.  However, it is challenging to figure out how to get there because it’s easy to confuse the symptoms with the cause of a problem. Thus, you end up solving the ‘symptom’ with a (potentially expensive) piece of technology that is ill-equipped to address the underlying cause.

We offer a structured process to assess your current database estate and select a technology solution that helps you get around this problem, reduce risks, and fast-track the path to your true objective with future-proofing, by forcing you to both identify the right problem and solve it the right way.

Assessment Framework


Below are the three key drivers powering the assessment:

  • Accelerated Assessment – automated assessment and benchmarking of existing and new database estates against industry best practices and standards.
  • Analyze & Fine-tune – analysis of assessment findings and implementation of recommendations on performance, consistency, and security aspects.
  • NOC + Zero-Touch L2 – shift left and automate L1/L2 service requests and incidents with the help of Database CoE automation experts.

As we progress on our journey, we want to establish ourselves as a catalyst to help our customers future-proof technology and help in early adoption of new solutions seamlessly.

If you have any questions about the CoE, you may reach out to them at COE_DATABASE@gavstech.com

CoE Team Members

  • Ashwin Kumar K
  • Ayesha Yasmin
  • Backiyalakshmi M
  • Dharmeswaran P
  • Gopinathan Sivasubramanian
  • Karthikeyan Rajasekaran
  • Lakshmi Kiran  
  • Manju Vellaichamy  
  • Manjunath Kadubayi  
  • Nagarajan A  
  • Nirosha Venkatesalu  
  • Praveen kumar Ralla  
  • Praveena M  
  • Rajesh Kumar Reddy Mannuru  
  • Satheesh Kumar K  
  • Sivagami R  
  • Subramanian Krishnan
  • Venkatesh Raghavendran

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots were not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there that are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source. They cannot be hosted on our own servers or run on-premise, and they are mostly generalized rather than use-case specific, by design.

DialogFlow vs. RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. Mostly complete here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities/intents/etc. either via the API or with their web based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires cloud-related communication.
  • It cannot be operated on premise.

Rasa NLU + Core

  • To compete with frameworks like Google DialogFlow and Microsoft LUIS, RASA came up with two built-in components: NLU and Core.
  • RASA NLU handles intents and entities, whereas RASA Core takes care of the dialogue flow and predicts the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface, the users are free to customize and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open-source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives control over the NLU, which can be customized for a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow and Keras.

The Rasa Stack is also a platform that has seen fast growth within two years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted like place or time. From the previous example, by intent, we understand the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, it’s an operation which can be performed by the bot. It could be replying something (Text, Image, Video, Suggestion, etc.) in return, querying a database or any other possibility by code.
  • Stories: These are sample interactions between the user and the bot, defined in terms of intents captured and actions performed. The developer can specify what to do on receiving a user input of some intent, with or without some entities. For example: if the user’s intent is to find the day of the week and the entity is ‘today’, find today’s day of the week and reply (see the sample training data below).
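As a small illustration of these concepts, here is what the day-of-the-week example could look like in the Markdown training-data format used by Rasa 1.x (the intent and action names are hypothetical; NLU examples and stories normally live in separate files such as nlu.md and stories.md):

    ## intent:ask_day
    - which day is today?
    - what day of the week is [today](date)?

    ## find the day of the week
    * ask_day{"date": "today"}
      - action_tell_day

The first block gives NLU training examples for the ask_day intent, with ‘today’ annotated as a date entity; the second is a story that maps the captured intent and entity to a custom action that replies with the day.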

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides intent classification and entity extraction. This helps the chatbot understand what the user is saying.

  • RASA CORE: uses machine learning techniques to generalize the dialogue flow of the system. It also predicts the next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

An assistant built with Rasa responds to a message in the following basic steps:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.

Areas of application

RASA is a one-stop solution for various industries:

  • Customer Service: broadly used for technical support, accounts and billing, conversational search, and travel concierge services.
  • Financial Services: used in many banks for account management, bills, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and more.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

JAVA – Cache Management

Sivaprakash Krishnan

This article explores various Java caching technologies that can play a critical role in improving application performance.

What is Cache Management?

A cache is a temporary, in-memory buffer that stores the most frequently used data, such as live transactions and logical datasets. A cache significantly improves the performance of an application, as reads/writes happen in the memory buffer, reducing retrieval time and load on the primary source. Implementing and maintaining a cache in any Java enterprise application is important.

  • The client-side cache is used to temporarily store static data transmitted over the network from the server, to avoid unnecessary calls to the server.
  • The server-side cache could be a query cache, CDN cache or a proxy cache where the data is stored in the respective servers instead of temporarily storing it on the browser.

Adoption of the right caching technique and tools allows the programmer to focus on the implementation of business logic; leaving the backend complexities like cache expiration, mutual exclusion, spooling, cache consistency to the frameworks and tools.

Caching should be designed specifically for the environment, considering single/multiple JVMs and clusters. Given below are multiple scenarios where caching can be used to improve performance.

1. In-process Cache – The In-process/local cache is the simplest cache, where the cache-store is effectively an object which is accessed inside the application process. It is much faster than any other cache accessed over a network and is strictly available only to the process that hosted it.


  • If the application is deployed only in one node, then in-process caching is the right candidate to store frequently accessed data with fast data access.
  • If the in-process cache is to be deployed in multiple instances of the application, then keeping data in-sync across all instances could be a challenge and cause data inconsistency.
  • An in-process cache can bring down the performance of any application where the server memory is limited and shared. In such cases, a garbage collector will be invoked often to clean up objects that may lead to performance overhead.
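A minimal sketch of an in-process cache, assuming Google’s Guava library (the loadProduct lookup is a hypothetical stand-in for a database call):

    import java.util.concurrent.TimeUnit;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;

    public class ProductCache {
        // Bounded in-process cache: entries expire 10 minutes after write,
        // and the size limit triggers eviction of older entries.
        private final LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1_000)
                .expireAfterWrite(10, TimeUnit.MINUTES)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return loadProduct(key); // called only on a cache miss
                    }
                });

        public String get(String key) {
            return cache.getUnchecked(key); // serves from memory or loads on miss
        }

        private String loadProduct(String key) {
            return "product-" + key; // hypothetical database/service lookup
        }
    }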

2. In-Memory Distributed Cache

Distributed caches are built externally to an application; they support reads/writes to/from data repositories, keep frequently accessed data in RAM, and avoid continuously fetching data from the data source. Such caches can be deployed on a cluster of multiple nodes, forming a single logical view.

  • In-memory distributed cache is suitable for applications running on multiple clusters where performance is key. Data inconsistency and shared memory aren’t matters of concern, as a distributed cache is deployed in the cluster as a single logical state.
  • As inter-process communication is required to access caches over a network, latency, failure, and object serialization are some overheads that could degrade performance.

3. In-Memory Database

In-memory database (IMDB) stores data in the main memory instead of a disk to produce quicker response times. The query is executed directly on the dataset stored in memory, thereby avoiding frequent read/writes to disk which provides better throughput and faster response times. It provides a configurable data persistence mechanism to avoid data loss.

Redis is an open-source in-memory data structure store used as a database, cache, and message broker. It offers data replication, different levels of persistence, HA, automatic partitioning that improves read/write.

Replacing the RDBMS with an in-memory database will improve the performance of an application without changing the application layer.
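A minimal sketch of the cache-aside pattern against Redis, assuming the Jedis client (host, port, key name, and TTL are illustrative):

    import redis.clients.jedis.Jedis;

    public class RedisCacheExample {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                String key = "patient:42:summary";
                String cached = jedis.get(key);
                if (cached == null) {
                    // Cache miss: load from the primary source, then store
                    // with a 300-second TTL so stale entries expire on their own.
                    cached = loadFromDatabase(key);
                    jedis.setex(key, 300, cached);
                }
                System.out.println(cached);
            }
        }

        private static String loadFromDatabase(String key) {
            return "summary-for-" + key; // hypothetical database lookup
        }
    }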

4. In-Memory Data Grid

An in-memory data grid (IMDG) is a data structure that resides entirely in RAM and is distributed among multiple servers.

Key features

  • Parallel computation of the data in memory
  • Search, aggregation, and sorting of the data in memory
  • Transaction management in memory
  • Event-handling
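A minimal sketch, assuming the Hazelcast IMDG library (the map name and values are illustrative); every node that runs this code joins the same cluster and shares the distributed map:

    import java.util.Map;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;

    public class DataGridExample {
        public static void main(String[] args) {
            // Starts (or joins) a cluster member in this JVM.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // A distributed map, partitioned across all cluster members.
            Map<String, Double> quotes = hz.getMap("stock-quotes");
            quotes.put("GAVS", 101.5);
            System.out.println(quotes.get("GAVS"));

            hz.shutdown();
        }
    }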

Cache Use Cases

There are use cases where a specific type of caching should be adopted to improve the performance of the application.

1. Application Cache

Application cache caches web content that can be accessed offline. Application owners/developers have the flexibility to configure what to cache and make it available for offline users. It has the following advantages:

  • Offline browsing
  • Quicker retrieval of data
  • Reduced load on servers

2. Level 1 (L1) Cache

This is the default transactional cache per session. It can be managed by any Java Persistence API (JPA) provider or object-relational mapping (ORM) tool.

The L1 cache stores entities that fall under a specific session and is cleared once the session is closed. If there are multiple transactions inside one session, entities from all of these transactions will be stored.

3. Level 2 (L2) Cache

The L2 cache can be configured to provide custom caches that hold the data for all entities to be cached. It is configured at the session-factory level and exists as long as the session factory is available (a minimal configuration sketch follows the list below). The cached data is visible to:

  • Sessions in an application.
  • Applications on the same servers with the same database.
  • Application clusters running on multiple nodes but pointing to the same database.
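As a minimal sketch of enabling the L2 cache with Hibernate as the JPA provider (the entity is illustrative; this also assumes hibernate.cache.use_second_level_cache=true and a region factory such as Ehcache configured in the persistence settings):

    import javax.persistence.Cacheable;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    // Marks the entity for the shared L2 cache; READ_WRITE keeps the
    // cached copy consistent with concurrent updates via soft locks.
    @Entity
    @Cacheable
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    public class Patient {
        @Id
        private Long id;
        private String name;
        // getters and setters omitted for brevity
    }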

4. Proxy / Load balancer cache

Enabling this reduces the load on application servers. When similar content is queried/requested frequently, proxy takes care of serving the content from the cache rather than routing the request back to application servers.

When a dataset is requested for the first time, proxy saves the response from the application server to a disk cache and uses them to respond to subsequent client requests without having to route the request back to the application server. Apache, NGINX, and F5 support proxy cache.


5. Hybrid Cache

A hybrid cache is a combination of JPA/ORM frameworks and open source services. It is used in applications where response time is a key factor.

Caching Design Considerations

  • Data loading/updating
  • Performance/memory size
  • Eviction policy
  • Concurrency
  • Cache statistics

1. Data Loading/Updating

Data loading into a cache is an important design decision to maintain consistency across all cached content. The following approaches can be considered to load data:

  • Using default function/configuration provided by JPA and ORM frameworks to load/update data.
  • Implementing key-value maps using open-source cache APIs.
  • Programmatically loading entities through automatic or explicit insertion.
  • External application through synchronous or asynchronous communication.

2. Performance/Memory Size

Resource configuration is an important factor in achieving the performance SLA. Available memory and CPU architecture play a vital role in application performance. Available memory has a direct impact on garbage collection performance. More GC cycles can bring down the performance.

3. Eviction Policy

An eviction policy enables a cache to ensure that the size of the cache doesn’t exceed the maximum limit. The eviction algorithm decides what elements can be removed from the cache depending on the configured eviction policy thereby creating space for the new datasets.

There are various popular eviction algorithms used in cache solutions (a minimal LRU sketch follows the list):

  • Least Recently Used (LRU)
  • Least Frequently Used (LFU)
  • First In, First Out (FIFO)
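A minimal LRU sketch using the JDK’s LinkedHashMap (the capacity is illustrative):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public LruCache(int maxEntries) {
            // accessOrder = true keeps entries ordered by most recent access,
            // which is exactly the ordering LRU eviction needs.
            super(16, 0.75f, true);
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict the least recently used entry once the limit is exceeded.
            return size() > maxEntries;
        }
    }

A new LruCache<>(100) can then be used like any Map; once it holds more than 100 entries, the least recently accessed one is dropped automatically on each insert.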

4. Concurrency

Concurrency is a common issue in enterprise applications. It creates conflicts and can leave the system in an inconsistent state. It can occur when multiple clients try to update the same data object at the same time during a cache refresh. A common solution is to use a lock, but this may affect performance. Hence, optimization techniques should be considered, as in the sketch below.
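One common optimization in plain Java is to rely on ConcurrentHashMap’s atomic update methods instead of an explicit lock; a minimal sketch (the loader is hypothetical):

    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrentCacheUpdate {
        private final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();

        // Atomic read-modify-write without an explicit lock: merge() applies
        // the remapping function atomically, so concurrent updates are not lost.
        public void recordHit(String key) {
            cache.merge(key, 1, Integer::sum);
        }

        // computeIfAbsent runs the loader at most once per missing key, even
        // when many threads request the same key at the same time.
        public Integer get(String key) {
            return cache.computeIfAbsent(key, k -> loadFromSource(k));
        }

        private Integer loadFromSource(String key) {
            return 42; // hypothetical stand-in for an expensive lookup
        }
    }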

5. Cache Statistics

Cache statistics are used to identify the health of a cache and provide insight into its behavior and performance. The following attributes can be used (a sketch of reading them follows the list):

  • Hit Count: Indicates the number of times the cache lookup has returned a cached value.
  • Miss Count: Indicates the number of times the cache lookup has returned a null, newly loaded, or uncached value.
  • Load success count: Indicates the number of times the cache lookup has successfully loaded a new value.
  • Total load time: Indicates time spent (nanoseconds) in loading new values.
  • Load exception count: Number of exceptions thrown while loading an entry
  • Eviction count: Number of entries evicted from the cache
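These attribute names map closely onto Guava’s CacheStats API; a minimal sketch of reading them (statistics must be switched on with recordStats() when the cache is built):

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheStats;

    public class CacheStatsExample {
        public static void main(String[] args) {
            Cache<String, String> cache = CacheBuilder.newBuilder()
                    .maximumSize(100)
                    .recordStats() // required: statistics are off by default
                    .build();

            cache.put("a", "1");
            cache.getIfPresent("a"); // hit
            cache.getIfPresent("b"); // miss

            CacheStats stats = cache.stats();
            System.out.println("hits=" + stats.hitCount()
                    + " misses=" + stats.missCount()
                    + " evictions=" + stats.evictionCount()
                    + " loadTimeNs=" + stats.totalLoadTime());
        }
    }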

Various Caching Solutions

There are various Java caching solutions available, such as Ehcache, Guava Cache, Caffeine, Hazelcast, Infinispan, and Redis; the right choice depends on the use case.


At GAVS, we focus on building a strong foundation of coding practices. We encourage and implement the “Design First, Code Later” principle and “Design-Oriented Coding Practices” to bring design thinking and an engineering mindset to building stronger solutions.

We have been training and mentoring our talent on cutting-edge JAVA technologies, and building reusable frameworks, templates, and solutions in major areas like Security, DevOps, Migration, Performance, etc. Our objective is to “partner with customers to realize business benefits through effective adoption of cutting-edge JAVA technologies, thereby enabling customer success”.

About the Author –

Sivaprakash is a solutions architect with strong solutions and design skills. He is a seasoned expert in JAVA, Big Data, DevOps, Cloud, Containers, and Micro Services. He has successfully designed and implemented a stable monitoring platform for ZIF. He has also designed and driven Cloud assessment/migration, enterprise BRMS, and IoT-based solutions for many of our customers. At present, his focus is on building ‘ZIF Business’, a new-generation AIOps platform aligned to business outcomes.