Customer Focus Realignment in a Pandemic Economy

Ashish Joseph

Business Environment Overview

The Pandemic Economy has created an environment that tests businesses to either adapt or perish; it has become a contest for survival of the fittest. On the brighter side, many organizations have stepped up and adapted to the crisis, working faster and better than ever before.

During this crisis, companies have been strategic in understanding their focus areas and where to concentrate most. From a high-level perspective, we can see that businesses have focused on recovering the sources of their revenues, rebuilding operations, restructuring the organization, and accelerating their digital transformation initiatives. In a way, the pandemic has forced companies to optimize their strategies and harness their core competencies in a hyper-competitive, survival-driven environment.

Need for Customer Focused Strategies

A pivotal strategy for maintaining and sustaining growth is for businesses to avoid the churn of their existing customers and to ensure the quality of delivery builds the trust needed for future collaborations and referrals. Many organizations, including GAVS, have understood that Customer Experience and Customer Success are critical for customer retention and brand affinity.

Businesses should realign the way they look at sales funnels. A large portion of the annual budget is usually allocated towards top-of-the-funnel activities to acquire more customers. But companies with customer success ingrained in their culture believe in the ideology that the bottom of the funnel feeds the top of the funnel. This strategy results in a self-sustaining and recurring revenue model for the business.

An independent survey conducted by the Customer Service Managers and Professionals Journal found that companies spend six times more to acquire new customers than to keep existing ones. In this pandemic economy, the cost of customer acquisition will be much higher than before, and organizations must be frugal in their spending. The best step forward is to make sure companies strive for excellence in their customer experience and deliver measurable value. A study by Bain and Company titled “Prescription for Cutting Costs” describes how increasing customer retention by 5% increases profits by 25% to 95%.

The path to a sustainable, high-growth business is to adopt customer-centric strategies that yield more value and growth for its customers. Enhancing customer experience should be a priority, and proper governance must be in place to monitor and gauge these strategies. Governance in the world of customer experience must revolve around identifying and managing the resources needed to drive sustained actions, establishing robust procedures to organize processes, and ensuring a framework for stellar delivery.

Scaling to ever-changing customer needs

Walker Information, a research firm, conducted independent research on B2B companies, focusing on the key initiatives that drive customer experience and future growth. The study included customer experience leaders, senior executives, and influencers representing a diverse set of business models in the industry. They published a report titled “Customer 2020: A Progress Report”; the following are the strategies that best meet the changing needs of customers in the B2B landscape.


Over 45% of the leaders highlighted the importance of developing a customer-centric culture that simplifies products and processes for the business. Now the question that we need to ask ourselves is, how do we as an organization scale up to these demands of the market? I strongly believe that each of us, in the different roles we play in the organization, has an impact.

The Executive Team can support more customer experience strategies, formulate success metrics, measure the impact of customer success initiatives, and ensure alignment with respect to the corporate strategy.

The Client Partners can ensure that they represent the voice of the customer, plot a feasible customer experience roadmap, be on point with customer intelligence data, and ensure transparency and communication with the teams and the customers. 

The cross-functional team managers and members can own and execute process improvements, personalize and customize customer journeys, and monitor key delivery metrics.

When all these members work in unison, the target goal of delivery excellence coupled with customer success is always achievable.

Going Above and Beyond

Organizations should aim for customers who can be retained for life. Retention depends on how far a business is willing to go the extra mile to add measurable value to its customers. Business contracts should evolve into partnerships that collaborate on their competitive advantages to bring solutions to real-world business problems.

As customer success champions, we should reevaluate the ways in which we can make a difference for our customers. By focusing on our core competencies and using the latest tools in the market, we can look for avenues that bring effort savings, productivity enhancements, process improvements, workflow optimizations, and business transformations that change the way our customers do business.

After all, we are GAVS. We aim to galvanize a sense of measurable success through our committed teams and innovative solutions. We should always strive for delivery excellence and customer success in everything we do.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Patient 360 & Journey Mapping using Graph Technology

Srinivasan Sundararajan

360 Degree View of Patient

With rising demands for quality and cost-effective patient care, healthcare providers are focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. In other words, data-driven healthcare is augmenting human intelligence.

The 360 Degree View of Patient, as it is called, plays a major role in delivering the required information to providers. It is a unified view of all the available information about a patient. It could include, but is not limited to, the following information:

  • Appointments made by the patients
  • Interaction with different doctors
  • Medications prescribed by the doctors
  • Patient’s relationships to other patients within the ecosystem, especially to identify family-history-related risks
  • Patient’s admission to hospitals or other healthcare facilities
  • Discharge and ongoing care
  • Patient personal wellness activities
  • Patient billing and insurance information
  • Linkages to the same patient in multiple disparate databases within the same hospital
  • Information about a patient’s involvement in various seminars, medical-related conferences, and other events

Limitations of Current Methods

As is evident in most hospitals, this information is usually scattered across multiple data sources/databases. Hospitals typically create a data warehouse by consolidating information from multiple sources into a unified database. However, this approach relies on relational databases, which depend on joining tables across entities to arrive at a complete picture. An RDBMS is not meant to handle relationships that extend across multiple hops and require drilling down many levels.

Role of Graph Technology & Graph Databases

A graph database is a collection of nodes (or entities typically) and edges (or relationships). A node represents an entity (for example, a person or an organization) and an edge represents a relationship between the two nodes that it connects (for example, friends). Both nodes and edges may have properties associated with them.

While there are multiple graph databases in the market today, like Neo4j, JanusGraph, and TigerGraph, the following technical discussion pertains to the graph database that is part of SQL Server 2019. The main advantage of this approach is that it utilizes the best RDBMS features wherever applicable, while keeping the graph database options for complex relationships like the 360-degree view of patients, making it a true polyglot persistence architecture.

As mentioned above, in SQL Server 2019 a graph database is a collection of node tables and edge tables. A node table represents an entity in a graph schema. An edge table represents a relationship in a graph. Edges are always directed and connect two nodes. An edge table enables users to model many-to-many relationships in the graph. Normal SQL Insert statements are used to create records into both node and edge tables.
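
For instance, here is a minimal, hedged sketch of that flow from Java (assuming a SQL Server 2019 instance reachable over the Microsoft JDBC driver; the Patient, Doctor, and treats tables are hypothetical illustrations, not the actual Rhodium schema):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class GraphSchemaSetup {
    public static void main(String[] args) throws Exception {
        // Illustrative connection string; adjust host, database, and credentials.
        String url = "jdbc:sqlserver://localhost:1433;databaseName=HealthDB;"
                   + "user=sa;password=<password>";
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {
            // AS NODE marks a table as a graph node table.
            st.execute("CREATE TABLE Patient (id INT PRIMARY KEY, name NVARCHAR(100)) AS NODE");
            st.execute("CREATE TABLE Doctor (id INT PRIMARY KEY, name NVARCHAR(100)) AS NODE");
            // AS EDGE marks a table as a graph relationship table.
            st.execute("CREATE TABLE treats (since DATE) AS EDGE");
            // Plain INSERT statements create node rows...
            st.execute("INSERT INTO Patient VALUES (1, 'John Doe')");
            st.execute("INSERT INTO Doctor VALUES (1, 'Dr. Smith')");
            // ...and edge rows reference the implicit $node_id of each endpoint.
            st.execute("INSERT INTO treats VALUES (" +
                       "(SELECT $node_id FROM Doctor WHERE id = 1), " +
                       "(SELECT $node_id FROM Patient WHERE id = 1), '2020-01-15')");
        }
    }
}
```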

While node tables and edge tables represent the storage of graph data, there are specialized commands that act as extensions of SQL and help traverse the nodes to retrieve the full details, such as patient 360-degree data.

MATCH statement

The MATCH statement links two node tables through an edge table so that complex relationships can be retrieved.
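
As an illustration, a MATCH query against the hypothetical Patient/Doctor/treats tables sketched earlier might be issued from Java as follows; the arrow pattern inside MATCH expresses the node-(edge)->node relationship:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PatientMatchQuery {
    // Prints the doctors who treat the given patient.
    static void printDoctorsFor(Connection con, String patientName) throws SQLException {
        String sql = "SELECT d.name FROM Doctor AS d, treats, Patient AS p " +
                     "WHERE MATCH(d-(treats)->p) AND p.name = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, patientName);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println("Treated by: " + rs.getString(1));
                }
            }
        }
    }
}
```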


SHORTEST_PATH statement

It finds the relationship path between two node tables by performing multiple hops recursively. It is one of the most useful statements for assembling the 360-degree view of a patient.
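
The following hedged sketch assumes a hypothetical relatedTo edge table connecting Patient nodes and walks one or more hops from a starting patient:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PatientShortestPathQuery {
    // Prints each chain of relatives reachable from the starting patient,
    // using SQL Server 2019's SHORTEST_PATH over the relatedTo edges.
    static void printRelatives(Connection con, String patientName) throws SQLException {
        String sql =
            "SELECT p1.name, STRING_AGG(p2.name, '->') WITHIN GROUP (GRAPH PATH) AS chain " +
            "FROM Patient AS p1, relatedTo FOR PATH AS r, Patient FOR PATH AS p2 " +
            "WHERE MATCH(SHORTEST_PATH(p1(-(r)->p2)+)) AND p1.name = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, patientName);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("chain"));
                }
            }
        }
    }
}
```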

There are more options and statements available as part of graph processing. Together, they help identify complex relationships across business entities and retrieve them.

Graph Processing in Rhodium

As mentioned in my earlier articles (Healthcare Data Sharing & Zero Knowledge Proofs in Healthcare Data Sharing), the GAVS Rhodium framework enables Patient Data Management and Patient Data Sharing, and graph databases play a major part in providing the patient 360 view as well as provider (doctor) credentialing data. The below screenshots show samples from the reference implementation.

[Screenshots: Rhodium reference implementation]

Patient Journey Mapping

Typically, a patient’s interaction with the healthcare service provider goes through a cycle of events. The goal of the provider organization is to make this journey smooth and provide the best care to patients. It should be noted that not all patients go through this journey sequentially; some may start the journey at a particular point and skip some intermediate points. Proper collection of the events behind patient journey mapping will also help predict future events, which will ultimately improve patient care.

Patient 360 data collection plays a major role in building the patient journey mapping. While there could be multiple definitions, the following is one example of mapping between patient 360-degree events and patient journey mapping.

[Illustration: mapping of patient 360-degree events to patient journey stages]

The below diagram shows an example of patient journey mapping.

[Diagram: example patient journey mapping]

Understanding patients better is essential for improving patient outcomes. The 360-degree view of patients and patient journey mapping are key components for providing such insights. While traditional technologies struggle to provide those links, graph databases and graph processing will play a major role in patient data management.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Enabling Success through Servant Leadership


Vasudevan Gopalan

Servant Leadership – does it seem like a dichotomy? Well, it is not. In this new age of Agile and Digital Transformation, it is a trait much sought after by organizations in their leaders.


The goal of Servant Leadership is to serve. It involves the leader supporting and empowering their teams, thus enabling success. The paradigm shift in the thought process here is that instead of the people working to serve the leader, the leader exists to serve the team. And do remember that a Servant Leader is a servant first, leader next – not the other way around 😊

In today’s Agile world of Software Delivery, the Scrum Master needs to be a Servant Leader.

So, what are the characteristics of a Servant Leader?

  • Self-aware
  • Humble
  • Has integrity
  • Results-oriented
  • Has foresight
  • Listens
  • Doesn’t abuse authority
  • Has intellectual authority
  • Collaborative
  • Trusting
  • Coaches
  • Resolves conflict

As you can see here, it is all about achieving results through people empowerment. When people realize that their Leader helps every team member build a deep sense of community and belonging in the workplace, there is a higher degree of accountability and responsibility carried out in their work.

Ultimately, a Servant Leader wants to help others thrive, and is happy to put the team’s needs before their own. They care about people and understand that the best results are produced not through top-down delegation but by building people up. People need psychological safety and autonomy to be creative and innovative.

As Patrick Lencioni describes, Humility is one of the 3 main pillars for ideal team players. Humility is “the feeling or attitude that you have no special importance that makes you better than others”.

Behaviors of Humble Agile Servant Leaders

  • Deep listening and observing
  • Openness towards new ideas from team members
  • Appreciating strengths and contributions of team members
  • Seeking contributions of team members to overcome challenges and limitations together
  • Being coachable coaches – i.e. coaching others while remaining easy to coach

Humility’s foe – Arrogance

In Robert Hogan’s terms, arrogance makes “the most destructive leaders” and “is the critical factor driving flawed decision-makers” who “create the slippery slope to organizational failure”.

Humility in Practice

A study of the personalities of CEOs of some of the top Fortune 1000 companies shows that much of what makes these companies successful is their CEOs’ humility. These CEOs share two sets of qualities that seem contradictory but strongly back each other up:

  • They are “self-effacing, quiet, reserved, even shy”. They are modest. And they admit mistakes.
  • At the same time, behind this reserved exterior, they are “fiercely ambitious, tremendously competitive, tenacious”. They have strong self-confidence and self-esteem. And they’re willing to listen to feedback and solicit input from knowledgeable subordinates.

According to Dr. Robert Hogan (2018), these characteristics of humility create “an environment of continuous improvement”.

What are the benefits of being a humble Servant Leader?

  • Increase inclusiveness – the foundation of trust
  • Strengthen the bond with peers – the basis of well-being
  • Deepen awareness
  • Improve empathy
  • Increase staff engagement

So, what do you think would be the outcomes for organizations that have practicing Servant Leaders?

Source:

https://www.bridge-global.com/blog/5-excellent-tips-to-become-a-supercharged-agile-leader/

About the Author –

Vasu heads the Engineering function for A&P. He is a Digital Transformation leader with ~20 years of IT industry experience spanning across Product Engineering, Portfolio Delivery, Large Program Management, etc. Vasu has designed and delivered Open Systems, Core Banking, Web / Mobile Applications, etc. Outside of his professional role, Vasu enjoys playing badminton and is a fitness enthusiast.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I will elaborate on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How does the Blockchain-powered GAVS Rhodium Platform address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, a Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details.
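
To make the prover/verifier exchange concrete, below is a toy Java sketch of one classic interactive ZKP, the Schnorr identification protocol, where the prover demonstrates knowledge of a secret x satisfying y = g^x mod p without ever revealing x. The tiny hard-coded parameters are for demonstration only and are nowhere near secure, and this is not the scheme of any particular healthcare platform:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class SchnorrZkpDemo {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        // Toy public parameters: p prime, q prime dividing p-1, g of order q mod p.
        BigInteger p = new BigInteger("23"); // demo-sized only; never use in practice
        BigInteger q = new BigInteger("11"); // 23 = 2*11 + 1
        BigInteger g = new BigInteger("4");  // 4 has order 11 mod 23
        // Prover's secret x and public value y = g^x mod p.
        BigInteger x = new BigInteger("7");
        BigInteger y = g.modPow(x, p);

        // 1. Commitment: prover picks random r and sends t = g^r mod p.
        BigInteger r = new BigInteger(q.bitLength(), rnd).mod(q);
        BigInteger t = g.modPow(r, p);
        // 2. Challenge: verifier sends a random c.
        BigInteger c = new BigInteger(q.bitLength(), rnd).mod(q);
        // 3. Response: prover sends s = r + c*x mod q (x never leaves the prover).
        BigInteger s = r.add(c.multiply(x)).mod(q);
        // 4. Verification: accept iff g^s == t * y^c (mod p).
        boolean ok = g.modPow(s, p).equals(t.multiply(y.modPow(c, p)).mod(p));
        System.out.println("Proof accepted: " + ok);
    }
}
```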

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is done based on a lack of trust.

  • Insurance companies are always wary of fraudulent claims (a major issue in any case), so a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the patient has indeed been advised to take the medicines before dispensing them.
  • Patients, most of the time, also want to make sure that the diagnosis and treatment given to them are proper and that no misdiagnosis is made.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, any of the parties, i.e. patient, hospital, pharmacy, or insurance company, can take on the role of a verifier; typically, patients and sometimes hospitals are the provers.

While ZKP can be applied to any of the transactions involving the above parties, current research in the industry is mostly focused on patient privacy rights, and ZKP initiatives target how much (or how little) information a patient (prover) must share with a verifier before getting the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I am not getting into the fundamentals of Blockchain, readers should understand that one of the fundamental backbones of Blockchain is trust within a context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, while still allowing participation in a transaction.

Some of the characteristics of Blockchain transactions that make them conducive to Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • A smart contract instance (i.e. a particular invocation of that smart contract) has an owner, i.e. the public key of the account holder who creates it; for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as it knows the public key of the initiator.
  • Some of the important aspects of an approval life cycle, like validation, approval, and rejection, can be delegated to other stakeholders by assigning that task to the respective public key of that stakeholder.
  • For example, if a doctor needs to approve a medical condition of a patient, the task can be delegated to the doctor, and only that particular doctor can approve it (see the sketch after this list).
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.
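
To illustrate the delegation idea from the bullets above, the following generic Java sketch uses standard ECDSA signatures: only the holder of the delegated private key can produce an approval, and any participant can verify it against the doctor’s public key. This is plain java.security code for illustration, not the Rhodium or any blockchain platform’s actual API:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class ApprovalDelegationDemo {
    public static void main(String[] args) throws Exception {
        // The doctor's key pair; on a blockchain the public key doubles as identity.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256);
        KeyPair doctorKeys = kpg.generateKeyPair();

        // The medical record (or its hash) the doctor is asked to approve.
        byte[] record = "patient=123;condition=approved".getBytes(StandardCharsets.UTF_8);

        // Only the holder of the delegated private key can produce this approval.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(doctorKeys.getPrivate());
        signer.update(record);
        byte[] approval = signer.sign();

        // Any participant can verify the approval with the doctor's public key alone.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(doctorKeys.getPublic());
        verifier.update(record);
        System.out.println("Approved by the delegated doctor: " + verifier.verify(approval));
    }
}
```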

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented on any Blockchain platform, including totally uncontrolled public blockchain platforms, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known and each participant trusts the others, but the due diligence that would otherwise accompany the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain involving multiple organizations whose access rights differ. Each member controls its own access to the Blockchain Network with cryptographic keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept in building trust-based networks. While a basic Blockchain platform can support the concept in a trust-based manner, a lot of research is being done to arrive at truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) builds on the zero-knowledge proof concept. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article about Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple stages, and at the advanced maturity levels, Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey


The healthcare industry today is affected by fraud and lack of trust on one side, and growing patient privacy concerns on the other. In this context, the introduction of Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Artificial Intelligence in Healthcare

Dr. Ramjan Shaik

Scientific progress is about many small advancements and occasional big leaps. Medicine is no exception. In a time of rapid healthcare transformation, health organizations must quickly adapt to evolving technologies, regulations, and consumer demands. Since the inception of electronic health record (EHR) systems, volumes of patient data have been collected, creating an atmosphere suitable for translating data into actionable intelligence. The growing field of artificial intelligence (AI) has created new technology that can handle large data sets, solving complex problems that previously required human intelligence. AI integrates these data sources to develop new insights on individual health and public health.

Highly valuable information can sometimes get lost amongst trillions of data points, costing the industry around $100 billion a year. Providers must ensure that patient privacy is protected and consider ways to find a balance between costs and potential benefits. The continued emphasis on cost, quality, and care outcomes will perpetuate the advancement of AI technology to realize additional adoption and value across healthcare.

Although most organizations utilize structured data for analysis, valuable patient information is often “trapped” in an unstructured format. This type of data includes physician and patient notes, e-mails, and audio voice dictations. Unstructured data is frequently richer and more multifaceted. It may be more difficult to navigate, but unstructured data can lead to a plethora of new insights. Using AI to convert unstructured data to structured data enables healthcare providers to leverage automation and technology to enhance processes, reduce the staff required to monitor patients while filling gaps in healthcare labor shortages, lower operational costs, improve patient care, and monitor the AI system for challenges.

AI is playing a significant role in medical imaging and clinical practice. Providers and healthcare organizations have recognized the importance of AI and are tapping into intelligence tools. The AI health market is expected to reach $6.6 billion by 2021 and to exceed $10 billion by 2024. AI offers the industry incredible potential to learn from past encounters and make better decisions in the future. Algorithms could standardize tests, prescriptions, and even procedures across the healthcare system, being kept up to date with the latest guidelines in the same way a phone’s operating system updates itself from time to time.

There are three main areas where AI efforts are being invested in the healthcare sector.

  • Engagement – This involves improving how patients interact with healthcare providers and systems.
  • Digitization – AI and other digital tools are expected to make operations more seamless and cost-effective.
  • Diagnostics – By using products and services that employ AI algorithms, diagnosis and patient care can be improved.

AI will be most beneficial in three other areas, namely physicians’ clinical judgment and diagnosis, AI-assisted robotic surgery, and virtual nursing assistants.

Following are some of the scenarios where AI makes a significant impact in healthcare:

  • AI can be utilized to provide personalized and interactive healthcare, including anytime face-to-face appointments with doctors. AI-powered chatbots can review patient symptoms and recommend whether a virtual consultation or a face-to-face visit with a healthcare professional is necessary.
  • AI can enhance the efficiency of hospitals and clinics in managing patient data, clinical history, and payment information by using predictive analytics. Hospitals are using AI to gather information on trillions of administrative and health record data points to streamline the patient experience. This collaboration of AI and data helps hospitals/clinics to personalize healthcare plans on an individual basis.
  • A taskforce augmented with artificial intelligence can quickly prioritize hospital activity for the benefit of all patients. Such projects can improve hospital admission and discharge procedures, bringing about enhanced patient experience.
  • Companies can use algorithms to scrutinize huge volumes of clinical and molecular data to personalize healthcare treatments, developing AI tools that collect and analyze everything from genetic sequencing to image recognition, thereby empowering physicians in improved patient care. AI-powered image analysis helps in connecting data points that support cancer discovery and treatment.
  • Big data and artificial intelligence can be used in combination to predict clinical, financial, and operational risks by taking data from all the existing sources. AI analyzes data throughout a healthcare system to mine, automate, and predict processes. It can be used to predict ICU transfers, improve clinical workflows, and even pinpoint a patient’s risk of hospital-acquired infections. Using artificial intelligence to mine health data, hospitals can predict and detect sepsis, which ultimately reduces death rates.
  • AI helps healthcare professionals harness their data to optimize hospital efficiency, better engage with patients, and improve treatment. AI can notify doctors when a patient’s health deteriorates and can even help in the diagnosis of ailments by combing its massive dataset for comparable symptoms. By collecting symptoms of a patient and inputting them into the AI platform, doctors can diagnose quickly and more effectively.   
  • Robot-assisted surgeries, ranging from minimally-invasive procedures to open-heart surgeries, enable doctors to perform procedures with precision, flexibility, and control beyond human capabilities, leading to fewer surgery-related complications, less pain, and quicker recovery times. Robots can be developed to improve endoscopies by employing the latest AI techniques, which help doctors get a clearer view of a patient’s illness from both a physical and data perspective.

Having understood the advancements of AI in various facets of healthcare, it is to be realized that AI is not yet ready to fully interpret a patient’s nuanced response to a question, nor is it ready to replace examining patients – but it is efficient in making differential diagnoses from clinical results. It is to be understood very clearly that the role of AI in healthcare is to supplement and enhance human judgment, not to replace physicians and staff.

We at GAVS Technologies are fully equipped with cutting edge AI technology, skills, facilities, and manpower to make a difference in healthcare.

Following are the ongoing and in-pipeline projects that we are working on in healthcare:

ONGOING PROJECT:

[Image: ongoing healthcare AI project]

PROJECTS IN PIPELINE:

[Image: healthcare AI projects in the pipeline]

Following are the projects that are being planned:

  • Controlling Alcohol Abuse
  • Management of Opioid Addiction
  • Pharmacy Support – drug monitoring and interactions
  • Reducing medication errors in hospitals
  • Patient Risk Scorecard
  • Patient Wellness – Chronic Disease management and monitoring

In conclusion, it is evident that the advent of AI in the healthcare domain has had a tremendous impact on patient treatment and care. For more information on how our AI-led solutions and services can help your healthcare enterprise, please reach out to us here.

About the Author –

Dr. Ramjan is a Data Analyst at GAVS. He has a Doctorate degree in the field of Pharmacy. He is passionate about drawing insights out of raw data and considers himself to be a ‘Data Person’.

He loves what he does and tries to make the most of his work. He is always learning something new from programming, data analytics, data visualization to ML, AI, and more.

Center of Excellence – Big Data

The Big Data CoE is a team of experts that experiments with and builds cutting-edge solutions by leveraging the latest technologies, like Hadoop, Spark, and TensorFlow, along with emerging open-source technologies, to deliver robust business results. A CoE is where organizations identify new technologies, learn new skills, and develop appropriate processes that are then deployed into the business to accelerate adoption.

Leveraging data to drive competitive advantage has shifted from being an option to a requirement in today’s hyper-competitive business landscape. One of the main objectives of the CoE is deciding on the right strategy for the organization to become data-driven and benefit from a world of Big Data, Analytics, Machine Learning, and the Internet of Things (IoT).

Triple Constraints of Projects

“According to the Chaos Report, 52% of projects are either delivered late or run over the allocated budget. The average across all companies is 189% of the original cost estimate. The average cost overrun is 178% for large companies, 182% for medium companies, and 214% for small companies. The average overrun is 222% of the original time estimate. For large companies, the average is 230%; for medium companies, the average is 202%; and for small companies, the average is 239%.”

The Big Data CoE plays a vital role in bringing down costs and reducing response times to ensure projects are delivered on time, by helping the organization build skilled resources.

Big Data’s Role

The CoE helps the organization build quality big data applications on its own by maximizing its ability to leverage data. Data engineers are committed to helping organizations:

  • define their strategic data assets and data audience
  • gather the required data and put in place new collection methods
  • get the most from predictive analytics and machine learning
  • have the right technology, data infrastructure, and key data competencies
  • ensure an effective security and governance system is in place to avoid huge financial, legal, and reputational problems

Data Analytics Stages

The CoE builds architecture-optimized building blocks covering all data analytics stages: data acquisition from a data source, preprocessing, transformation, data mining, modeling, validation, and decision making.


Focus areas

Algorithms support the following computation modes:

  • Batch processing
  • Online processing
  • Distributed processing
  • Stream processing

The Big Data analytics lifecycle can be divided into the following nine stages:

  • Business Case Evaluation
  • Data Identification
  • Data Acquisition & Filtering
  • Data Extraction
  • Data Validation & Cleansing
  • Data Aggregation & Representation
  • Data Analysis
  • Data Visualization
  • Utilization of Analysis Results

A key focus of the Big Data CoE is to establish a data-driven organization by developing proofs of concept with the latest Big Data and Machine Learning technologies. As part of CoE initiatives, we are developing AI widgets for various marketplaces, such as Azure, AWS, Magento, and others. We are also actively engaging and motivating the team to learn cutting-edge technologies and tools like Apache Spark and Scala. We encourage the team to approach each problem pragmatically by helping them understand the latest architectural patterns beyond the traditional MVC methods.

It has been established that business-critical decisions supported by data-driven insights have been more successful. We aim to take our organization forward by unleashing the true potential of data!

If you have any questions about the CoE, you may reach out to them at SME_BIGDATA@gavstech.com

CoE Team Members

  • Abdul Fayaz
  • Adithyan CR
  • Aditya Narayan Patra
  • Ajay Viswanath V
  • Balakrishnan M
  • Bargunan Somasundaram
  • Bavya V
  • Bipin V
  • Champa N
  • Dharmeswaran P
  • Diamond Das
  • Inthazamuddin K
  • Kadhambari Manoharan
  • Kalpana Ashokan
  • Karthikeyan K
  • Mahaboobhee Mohamedfarook
  • Manju Vellaichamy
  • Manojkumar Rajendran
  • Masthan Rao Yenikapati
  • Nagarajan A
  • Neelagandan K
  • Nithil Raj Tharammal Paramb
  • Radhika M
  • Ramesh Jayachandar
  • Ramesh Natarajan
  • Ruban Salamon
  • Senthil Amarnath
  • T Mohammed Anas Aadil
  • Thulasi Ram G
  • Vijay Anand Shanmughadass
  • Vimalraj Subash

Center of Excellence – Database


During World War II, there was a time when the Germans were winning on every front and the fear of Hitler taking over the world was looming. At that point in time, had the Allies not taken drastic measures and invested in ground-breaking technologies such as radar, aircraft, and atomic energy, the world would have been starkly different from what it is today.

Even in today’s world, the pace at which things are changing is incredible. The evolution of technology is unstoppable, and companies must be ready. There is an inherent need for them to differentiate themselves by providing solutions that showcase a deep understanding of domain and technology to address evolving customer expectations. It becomes extremely important for companies to establish themselves as incubators of innovation, with the ability to constantly innovate and fail fast. Centers of Excellence can be an effective solution to address these challenges.

“An Organisation’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage”

  • Jack Welch, former Chairman and CEO of General Electric

The Database CoE was formed with a mission to groom, enhance and incubate talents within GAVS to stay abreast of the evolving technology landscape and help our customers with cutting edge technology solutions.

We identify the experts and the requirements across all customer engagements within GAVS. Regular connects and technology sessions ensure everyone in the CoE is learning at least one new topic each week. Below are our charter and roadmap by priority:

[Chart: Database CoE charter and roadmap]

The Database CoE is focused on assisting our customers in every stage of the engagement, right from onboarding through planning and execution, with a consultative approach and a futuristic mindset. With the above primary goals, we are currently working on the below initiatives:

Competency Building

When we help each other and stand together, we evolve to be the strongest.

Continuous learning is an imperative in the current times. Our fast-paced training within project teams is an alternative to traditional classroom sessions. We believe true learning happens when you work on something hands-on. With this key aspect in mind, we divide the team into smaller groups and map them to projects to gain larger exposure and learn from experience.

This started off with a pilot with an ISP, where we trained four CoE members in Azure and Power BI within a span of two months.


Database Maturity Assessment

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly “

  • George Westerman, research scientist at the MIT Center for Digital Business

Why Bother with a Database Assessment?

We often know we have a problem and can visualize the ideal state we want our technology solution to get us to. However, it is challenging to figure out how to get there because it’s easy to confuse the symptoms with the cause of a problem. Thus, you end up solving the ‘symptom’ with a (potentially expensive) piece of technology that is ill-equipped to address the underlying cause.

We offer a structured process to assess your current database estate and select a technology solution. This helps you get around the problem, reduce risks, and fast-track the path to your true objective with future-proofing, by forcing you to both identify the right problem and solve it the right way.

Assessment Framework


Below are the three key drivers powering the assessment:

  • Accelerated Assessment: Automated assessment and benchmarking of existing and new database estates against industry best practices and standards.
  • Analyze & Fine-tune: Analyze assessment findings and implement recommendations on performance, consistency, and security aspects.
  • NOC + Zero Touch L2: Shift left and automate L1/L2 service requests and incidents with the help of Database CoE automation experts.

As we progress on our journey, we want to establish ourselves as a catalyst to help our customers future-proof technology and help in early adoption of new solutions seamlessly.

If you have any questions about the CoE, you may reach out to them at COE_DATABASE@gavstech.com

CoE Team Members

  • Ashwin Kumar K
  • Ayesha Yasmin
  • Backiyalakshmi M
  • Dharmeswaran P
  • Gopinathan Sivasubramanian
  • Karthikeyan Rajasekaran
  • Lakshmi Kiran  
  • Manju Vellaichamy  
  • Manjunath Kadubayi  
  • Nagarajan A  
  • Nirosha Venkatesalu  
  • Praveen kumar Ralla  
  • Praveena M  
  • Rajesh Kumar Reddy Mannuru  
  • Satheesh Kumar K  
  • Sivagami R  
  • Subramanian Krishnan
  • Venkatesh Raghavendran

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots was not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there that are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source. These cannot be hosted on our own servers and cannot run on-premise. They are mostly generalized, and not very specific, for a reason.

DialogFlow vs. RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. “Mostly complete” here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities/intents/etc. either via the API or with their web-based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires cloud-related communication.
  • It cannot be operated on-premise.

Rasa NLU + Core

  • To compete with the best frameworks like Google DialogFlow and Microsoft Luis, RASA came up with two built-in components, NLU and Core.
  • RASA NLU handles intents and entities, whereas RASA Core takes care of the dialogue flow and guesses the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface; users are free to customize and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open-source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives control over the NLU, through which we can customize it for a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow and Keras.

Also, the Rasa Stack is a platform that has seen fast growth within two years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted like place or time. From the previous example, by intent, we understand the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, an action is an operation performed by the bot. It could be replying with something (text, image, video, suggestion, etc.), querying a database, or any other operation implemented in code.
  • Stories: These are sample interactions between the user and bot, defined in terms of intents captured and actions performed. So, the developer can mention what to do if you get a user input of some intent with/without some entities. Like saying if user intent is to find the day of the week and entity is today, find the day of the week of today and reply.

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides the function of intent classification and entity extraction. This helps the chatbot to understand what the user is saying. Refer to the below diagram of how NLU processes user input.
[Diagram: how RASA NLU processes user input]

  • RASA CORE: it uses machine learning techniques to generalize the dialogue flow of the system. It also predicts next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:

[Diagram: how a Rasa assistant responds to a message]

The steps are as follows (a client-side usage sketch follows the list):

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.
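
For example, a running Rasa server exposes a REST channel at /webhooks/rest/webhook (when the rest channel is enabled in credentials.yml). The hedged sketch below, with an assumed local host/port and sender id, posts a user message from Java and prints the JSON array of bot responses:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RasaRestClient {
    public static void main(String[] args) throws Exception {
        // Assumes a Rasa server started locally, e.g. with: rasa run
        String endpoint = "http://localhost:5005/webhooks/rest/webhook";
        String payload = "{\"sender\": \"user-1\", \"message\": \"Which day is today?\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // The reply is a JSON array of bot responses (text, images, buttons, ...).
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```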

Areas of application

RASA is a one-stop solution for various industries:

  • Customer Service: broadly used for technical support, accounts and billing, conversational search, and travel concierge services.
  • Financial Services: used in many banks for account management, bills, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and more.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

Business with a Heart

Balaji Uppili

People and technology are converging like never before, as the world is gripped by COVID-19. Just a few months ago, nobody could have predicted or foreseen the way businesses are having to work today. As we were strategizing on corporate governance, digital transformation, and the best of resiliency plans to ensure business continuity, no one ever anticipated the scale and enormity of COVID-19.

Today, it has become obvious that COVID-19 has brought about the convergence of technology and humanity, changing the way businesses work and function. While we as leaders have been thinking largely about business outcomes, this pandemic has triggered a more humane approach, and that approach is here to stay. The humane approach will be the differentiator and will prove the winner.

There is no doubt that this pandemic has brought an urgent need to accelerate our digital capabilities. With the focus on strong IT infrastructure and remote working, workforces were able to transition to working from home and meeting through video conferencing. Surprisingly, this has turned out to increase the humane aspect of business relations – it has now become alright for both parties to see children, spouses, or pets in meeting backgrounds, and that in itself has broken down huge barriers and formalities. It is refreshing to see the emerging empathy that is getting stronger with every meeting, increasing collaboration and communication. It is becoming increasingly clear that we have overlooked the important factor of how people have been showing up to work. Suddenly it is more visible that people have equally strong roles within the family – when we see parents having to home-school their children, or carrying other care obligations, we view their personal lives and are able to empathize with them more. We are seeing the impact that business can have on people and their personal lives, and this is a never-before opportunity for leaders to put our people first.

And with customers being the center of every business, not being able to meet in person has warranted newer ways to collaborate and has strengthened customer-centricity initiatives even more. It has become evident that no matter how much we as leaders think of automating operations, it is human connections that run businesses successfully. Many things have been unraveled – important business imperatives like the criticality of clean workspace compliance, and the fact that offshoring thousands of miles away is not a compromise but a very cost-effective and efficient way of getting things done. Productivity has also increased; work done thus far has had a positive impact of at least 20%, or even more in certain situations. As boundaries and barriers are broken, the rigidities of who should work on something and when they should work on it have relaxed. Employees are less regimental about time. Virtual crowdsourcing has become the norm – you throw an idea at a bunch of people, and whoever has the ability and the bandwidth handles the task, instead of a formal task assignment; this highlights the fungibility of people.

All in all, the reset in execution processes and the introduction of a much more humane approach are here to stay, making the new norm even more exciting.

About the Author –

Balaji has over 25 years of experience in the IT industry, across multiple verticals. His enthusiasm, energy, and client focus is a rare gift, and he plays a key role in bringing new clients into GAVS. Balaji heads the Delivery department and passionately works on Customer delight. He says work is worship for him and enjoys watching cricket, listening to classical music, and visiting temples.

JAVA – Cache Management

Sivaprakash Krishnan

This article explores various Java caching technologies that can play critical roles in improving application performance.

What is Cache Management?

A cache is a temporary, high-speed memory buffer that stores the most frequently used data, like live transactions and logical datasets. It greatly improves the performance of an application, as reads/writes happen in the memory buffer, reducing retrieval time and the load on the primary source. Implementing and maintaining a cache in any Java enterprise application is important.

  • The client-side cache is used to temporarily store static data transmitted over the network from the server, to avoid unnecessary calls to the server.
  • The server-side cache could be a query cache, CDN cache or a proxy cache where the data is stored in the respective servers instead of temporarily storing it on the browser.

Adopting the right caching technique and tools allows the programmer to focus on the implementation of business logic, leaving backend complexities like cache expiration, mutual exclusion, spooling, and cache consistency to the frameworks and tools.

Caching should be designed specifically for the environment, considering single/multiple JVMs and clusters. Given below are multiple scenarios where caching can be used to improve performance.

1. In-process Cache – The in-process/local cache is the simplest cache, where the cache store is effectively an object accessed inside the application process. It is much faster than any cache accessed over a network and is strictly available only to the process that hosts it (a minimal sketch follows the list below).


  • If the application is deployed only in one node, then in-process caching is the right candidate to store frequently accessed data with fast data access.
  • If the in-process cache is to be deployed in multiple instances of the application, then keeping data in-sync across all instances could be a challenge and cause data inconsistency.
  • An in-process cache can bring down the performance of any application where the server memory is limited and shared. In such cases, a garbage collector will be invoked often to clean up objects that may lead to performance overhead.
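
A minimal sketch of such an in-process cache, built on ConcurrentHashMap with a caller-supplied loader (loadFromDatabase in the usage comment is a hypothetical stand-in for the primary data source):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class InProcessCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    // Returns the cached value, loading and caching it on first access.
    public V get(K key, Function<K, V> loader) {
        return store.computeIfAbsent(key, loader);
    }

    // Removes a stale entry so the next read reloads it.
    public void invalidate(K key) {
        store.remove(key);
    }
}

// Usage sketch: V value = cache.get(customerId, id -> loadFromDatabase(id));
```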

2. In-Memory Distributed Cache

Distributed caches can be built externally to an application; they support reads/writes to/from data repositories, keep frequently accessed data in RAM, and avoid continuously fetching data from the data source. Such caches can be deployed on a cluster of multiple nodes, forming a single logical view.

  • In-memory distributed cache is suitable for applications running on multiple clusters where performance is key. Data inconsistency and shared memory aren’t matters of concern, as a distributed cache is deployed in the cluster as a single logical state.
  • As inter-process communication is required to access caches over a network, latency, failure, and object serialization are some overheads that could degrade performance.

3. In-Memory Database

An in-memory database (IMDB) stores data in the main memory instead of on disk to produce quicker response times. Queries are executed directly on the dataset stored in memory, avoiding frequent reads/writes to disk, which provides better throughput and faster response times. It provides a configurable data persistence mechanism to avoid data loss.

Redis is an open-source in-memory data structure store used as a database, cache, and message broker. It offers data replication, different levels of persistence, HA, automatic partitioning that improves read/write.

Replacing the RDBMS with an in-memory database will improve the performance of an application without changing the application layer.
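
As a small, hedged sketch of read-through caching with Redis (assuming the open-source Jedis client on the classpath and a Redis server on localhost; fetchFromDb is a hypothetical placeholder for the primary source):

```java
import redis.clients.jedis.Jedis;

public class RedisCacheExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "patient:123:summary";
            String value = jedis.get(key);     // 1. try the cache first
            if (value == null) {
                value = fetchFromDb(key);      // 2. cache miss: hit the primary source
                jedis.setex(key, 300, value);  // 3. cache with a 5-minute TTL
            }
            System.out.println(value);
        }
    }

    // Hypothetical stand-in for a real database lookup.
    private static String fetchFromDb(String key) {
        return "data loaded from RDBMS for " + key;
    }
}
```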

4. In-Memory Data Grid

An in-memory data grid (IMDG) is a data structure that resides entirely in RAM and is distributed among multiple servers.

Key features

  • Parallel computation of the data in memory
  • Search, aggregation, and sorting of the data in memory
  • Transactions management in memory
  • Event-handling

Cache Use Cases

There are use cases where a specific caching approach should be adopted to improve the performance of the application.

1. Application Cache

An application cache caches web content so that it can be accessed offline. Application owners/developers have the flexibility to configure what to cache and make it available to offline users. It has the following advantages:

  • Offline browsing
  • Quicker retrieval of data
  • Reduced load on servers

2. Level 1 (L1) Cache

This is the default transactional cache, scoped per session. It can be managed by any Java Persistence API (JPA) provider or object-relational mapping (ORM) tool.

The L1 cache stores entities that fall under a specific session and is cleared once the session is closed. If there are multiple transactions inside one session, entities from all of these transactions are stored.

3. Level 2 (L2) Cache

The L2 cache can be configured to provide custom caches that can hold data for all entities to be cached. It is configured at the session-factory level and exists as long as the session factory is available. The cached data is visible across:

  • Sessions in an application.
  • Applications on the same servers with the same database.
  • Application clusters running on multiple nodes but pointing to the same database.

4. Proxy / Load balancer cache

Enabling this reduces the load on application servers. When similar content is queried/requested frequently, the proxy serves the content from the cache rather than routing the request back to the application servers.

When a dataset is requested for the first time, the proxy saves the response from the application server to a disk cache and uses it to respond to subsequent client requests without routing them back to the application server. Apache, NGINX, and F5 support proxy caching.


5. Hybrid Cache

A hybrid cache is a combination of JPA/ORM frameworks and open-source caching services. It is used in applications where response time is a key factor.

Caching Design Considerations

  • Data loading/updating
  • Performance/memory size
  • Eviction policy
  • Concurrency
  • Cache statistics

1. Data Loading/Updating

Data loading into a cache is an important design decision to maintain consistency across all cached content. The following approaches can be considered to load data:

  • Using default function/configuration provided by JPA and ORM frameworks to load/update data.
  • Implementing key-value maps using open-source cache APIs.
  • Programmatically loading entities through automatic or explicit insertion.
  • External application through synchronous or asynchronous communication.

2. Performance/Memory Size

Resource configuration is an important factor in achieving the performance SLA. Available memory and CPU architecture play a vital role in application performance. Available memory has a direct impact on garbage collection performance; more GC cycles can bring down performance.

3. Eviction Policy

An eviction policy enables a cache to ensure that the size of the cache doesn’t exceed the maximum limit. The eviction algorithm decides what elements can be removed from the cache depending on the configured eviction policy thereby creating space for the new datasets.

There are various popular eviction algorithms used in cache solutions (a minimal LRU sketch follows the list):

  • Least Recently Used (LRU)
  • Least Frequently Used (LFU)
  • First In, First Out (FIFO)
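
As an example, the LRU policy can be sketched in a few lines with java.util.LinkedHashMap in access order; production caches use more elaborate implementations, but the idea is the same:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder = true orders entries from least- to most-recently used.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    // Called by LinkedHashMap after each put; evicts when the cap is exceeded.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```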

4. Concurrency

Concurrency is a common issue in enterprise applications. It creates conflicts and can leave the system in an inconsistent state. It can occur when multiple clients try to update the same data object at the same time during a cache refresh. A common solution is to use a lock, but this may affect performance; hence, optimization techniques should be considered.

5. Cache Statistics

Cache statistics are used to identify the health of a cache and provide insights into its behavior and performance. The following attributes can be used (a sketch with one popular library follows the list):

  • Hit count: the number of times a cache lookup has returned a cached value.
  • Miss count: the number of times a cache lookup has returned a null, newly loaded, or uncached value.
  • Load success count: the number of times a cache lookup has successfully loaded a new value.
  • Total load time: the time spent (in nanoseconds) loading new values.
  • Load exception count: the number of exceptions thrown while loading an entry.
  • Eviction count: the number of entries evicted from the cache.
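
These attributes map closely onto, for example, Guava’s CacheStats. A hedged sketch, assuming the Guava library on the classpath and a hypothetical loadEntry method standing in for a real data source:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.CacheStats;
import com.google.common.cache.LoadingCache;

public class CacheStatsExample {
    public static void main(String[] args) throws Exception {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1000)
                .recordStats()   // enable hit/miss/load bookkeeping
                .build(CacheLoader.from(CacheStatsExample::loadEntry));

        cache.get("a");   // miss + load
        cache.get("a");   // hit

        CacheStats stats = cache.stats();
        System.out.println("hits=" + stats.hitCount()
                + " misses=" + stats.missCount()
                + " loads=" + stats.loadSuccessCount()
                + " loadTimeNs=" + stats.totalLoadTime()
                + " evictions=" + stats.evictionCount());
    }

    // Hypothetical stand-in for a real data source.
    private static String loadEntry(String key) {
        return "value-for-" + key;
    }
}
```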

Various Caching Solutions

There are various Java caching solutions available — the right choice depends on the use case.


At GAVS, we focus on building a strong foundation of coding practices. We encourage and implement the “Design First, Code Later” principle and “Design Oriented Coding Practices” to bring in design thinking and engineering mindset to build stronger solutions.

We have been training and mentoring our talent on cutting-edge JAVA technologies, building reusable frameworks, templates, and solutions on the major areas like Security, DevOps, Migration, Performance, etc. Our objective is to “Partner with customers to realize business benefits through effective adoption of cutting-edge JAVA technologies thereby enabling customer success”.

About the Author –

Sivaprakash is a solutions architect with strong solutions and design skills. He is a seasoned expert in JAVA, Big Data, DevOps, Cloud, Containers, and Micro Services. He has successfully designed and implemented a stable monitoring platform for ZIF. He has also designed and driven Cloud assessment/migration, enterprise BRMS, and IoT-based solutions for many of our customers. At present, his focus is on building ‘ZIF Business’ a new-generation AIOps platform aligned to business outcomes.