Happy Birthday MLK – My ode to the Free Thinkers, Disruptors, and Iconoclasts

Sumit Ganguli

CEO, GAVS Technologies

While we were gearing up for the weekend, I noticed that Monday, January 18, is Rev. Martin Luther King Jr.’s birth anniversary. This, coupled with the overcast sky and the cool winter day, conspired to make me sit back and reminisce about the events of the past few months.

Working from home, I have become accustomed to keeping my TV on mute, alternating between CNN and Fox News while I go through my emails, video conferences and other work routines. And that is when I saw the traumatic video of George Floyd’s death in Minneapolis and the massive demonstrations that ensued across the US and in other parts of the world. The Black Lives Matter movement rightfully gained immense momentum and soon #BlackLivesMatter became one of the most trending of all hashtags.

An avid tennis fan, I got to watch the US Open on TV this year, played without any spectators. But I was most inspired by the young Japanese tennis player Naomi Osaka, who went on to win the US Open and chose to draw attention to #BLM by wearing the names of seven Black victims who were being memorialized by the movement. She succeeded in persuading me to read more about the movement and many of the victims.

Cut to the present, we now have our first Black Vice President-elect, Kamala Harris, who is of Jamaican and Indian heritage. Just the other day, my 90-year-old mother, who is in Bangalore and is quite a political junkie, challenged me to name the Indian lady who was announced to be a member of Mr. Joe Biden’s economic committee. Convinced that my mother was mistaken, I told her that Janet Yellen was not Indian. But she insisted, and then I recalled that Ms. Neera Tanden had been nominated to head the Office of Management and Budget.

The Indian diaspora has been deservedly proud of the achievements of the Indian leaders in America – Satya Nadella, Microsoft; Arvind Krishna, IBM; Ajay Banga, Mastercard; Nandita Bakshi, Bank of the West & Federal Reserve Bank; Sanat Chattopadhyay, Merck; Niren Chaudhury, Panera Bread – and with Reverend Martin Luther King’s birth anniversary around the corner, I think it is opportune for us to celebrate the avant-garde free thinkers, disruptors, and iconoclasts who made some of this possible.

“In the morning, I bathe my intellect in the stupendous and cosmogonal philosophy of the Bhagvat Geeta, since whose composition years of the Gods have elapsed, and in comparison with which our modern world and its literature seem puny and trivial… The pure Walden water is mingled with the sacred water of the Ganges” (Thoreau, Walden).

In the Boston of 1854, Henry David Thoreau and Ralph Waldo Emerson derived much of their concept of Transcendentalism, Non-Violence, and Civil Disobedience from the concepts of Ahimsa and Dharma in the ancient Indian scriptures, the Upanishads and the Gita. They read these at the Harvard Library and wrote extensively about them.

In 1893, a man was thrown off a train in South Africa, which led him to take on the mighty British and launch his Satyagraha movement to fight for India’s independence. His movement, in turn, was highly influenced by Thoreau’s Civil Disobedience. That man, of course, is known around the world as Mahatma Gandhi.

From 1954 to 1968, Rev. Martin Luther King Jr. and other activists led the Civil Rights Movement in America. He drew inspiration from the philosophy of Gandhi, who has been immortalized as the Father of the Nation in India. This is truly a circle of ideas that traversed oceans and continents.

Today, we are all beneficiaries of the largesse of the thoughts and visions of these great luminaries. On MLK’s birthday, Monday, January 18, I believe we would be well served to pay our ode to the Reverend and his fellow free thinkers John Lewis, Rosa Parks, and many others for their audacious vision, temerity, and currency of ideas and ideals – for these disruptors and iconoclasts made it possible for us to live the life of our dreams in America, a country that we have come to love and cherish.

Patient Segmentation Using Data Mining Techniques

Srinivasan Sundararajan


Patient Segmentation & Quality Patient Care

As the need for quality and cost-effective patient care increases, healthcare providers are increasingly focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. Simply put, data-driven healthcare is augmenting the human intelligence based on experience and knowledge.

Segmentation is a standard technique used in Retail, Banking, Manufacturing, and other industries that need to understand their customers in order to provide better customer service. Customer segmentation defines the behavioral and descriptive profiles of customers. These profiles are then used to provide personalized marketing programs and strategies for each group.

In a way, patients are like customers to healthcare providers. Though the quality of care takes precedence over profit-making intentions, a similar segmentation of patients will immensely benefit healthcare providers, mainly for the following reasons:

  • Customizing the patient care based on their behavior profiles
  • Enabling a stronger patient engagement
  • Providing the backbone for data-driven decisions on patient profile
  • Performing advanced medical research like launching a new vaccine or trial

The benefits are obvious, and individual hospitals may add more points to the above list; the rest of this article is about how to perform patient segmentation using data mining techniques.

Data Mining for Patient Segmentation

In Data Mining, a segmentation or clustering algorithm iterates over cases in a dataset to group them into clusters that share similar characteristics. These groupings are useful for exploring data, identifying anomalies in the data, and creating predictions. Clustering is an unsupervised data mining (machine learning) technique used for grouping data elements without advance knowledge of the group definitions.

K-means clustering is a well-known method of assigning cluster membership by minimizing the differences among items in a cluster while maximizing the distance between clusters. The clustering algorithm first identifies relationships in a dataset and generates a series of clusters based on those relationships. A scatter plot is a useful way to visually represent how the algorithm groups data, as shown in the following diagram. The scatter plot represents all the cases in the dataset, and each case is a point on the graph. The cluster points on the graph illustrate the relationships that the algorithm identifies.


One of the important parameters for a K-Means algorithm is the number of clusters or the cluster count. We need to set this to a value that is meaningful to the business problem that needs to be solved. However, there is good support in the algorithm to find the optimal number of clusters for a given data set, as explained next.

To determine the number of clusters for the algorithm to use, we can plot the within-cluster sum of squares against the number of clusters extracted. The appropriate number of clusters to use is at the bend, or ‘elbow’, of the plot. The Elbow Method is one of the most popular methods to determine this optimal value of k, i.e., the number of clusters. A sketch of code that creates this curve is shown below.
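This is an illustrative sketch only, assuming scikit-learn and matplotlib are available; synthetic data generated with make_blobs stands in for the real patient attributes described later in this article.

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the nine admission attributes (HbA1c, FBG, etc.)
X, _ = make_blobs(n_samples=500, centers=4, n_features=9, random_state=42)
X = StandardScaler().fit_transform(X)

# Within-cluster sum of squares (inertia) for each candidate k
wcss = []
for k in range(1, 11):
    model = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(model.inertia_)

plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("Number of clusters (k)")
plt.ylabel("Within-cluster sum of squares")
plt.title("Elbow curve")
plt.show()
```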


In this example, based on the graph, it looks like k = 4 would be a good value to try.

Reference Patient Segmentation Using K-Means Algorithm in GAVS Rhodium Platform

In the GAVS Rhodium Platform, which helps healthcare providers with Patient Data Management and Patient Data Sharing, there is a reference implementation of Patient Segmentation using the K-Means algorithm. The following are the attributes used, based on publicly available patient admission data (no personal information is used in this data set). Note that the reference implementation uses sample attributes; in a real scenario, consulting healthcare practitioners will help identify the correct attributes to use for clustering.

To prepare the data for clustering, patients are profiled along the following dimensions:

  • HbA1c: Measuring the glycated form of hemoglobin to obtain the three-month average of blood sugar.
  • Triglycerides: Triglycerides are the main constituents of natural fats and oils. This test indicates the amount of fat or lipid found in the blood.
  • FBG: Fasting Plasma Glucose test measures the amount of glucose levels present in the blood.
  • Systolic: Blood Pressure is the pressure of circulating blood against the walls of Blood Vessels. This test relates to the phase of the heartbeat when the heart muscle contracts and pumps blood from the chambers into the arteries.
  • Diastolic: The diastolic reading is the pressure in the arteries when the heart rests between beats.
  • Insulin: Insulin is a hormone that helps move blood sugar, known as glucose, from your bloodstream into your cells. This test measures the amount of insulin in your blood.
  • HDL-C: Cholesterol is a fat-like substance that the body uses as a building block to produce hormones. HDL-C or good cholesterol consists primarily of protein with a small amount of cholesterol. It is considered to be beneficial because it removes excess cholesterol from tissues and carries it to the liver for disposal. The test for HDL cholesterol measures the amount of HDL-C in blood.
  • LDL-C: LDL-C or bad cholesterol present in the blood as low-density lipoprotein, a relatively high proportion of which is associated with a higher risk of coronary heart disease. This test measures the LDL-C present in the blood.
  • Weight: The patient’s body weight, recorded at admission.

The above tests are taken for the patients during the admission process.

The following is the code snippet behind the scenes that creates the patient clusters.

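Below is a minimal stand-in for that snippet, assuming scikit-learn and pandas; it is illustrative only, not the actual Rhodium implementation, and synthetic data replaces the real admission records.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

columns = ["HbA1c", "Triglycerides", "FBG", "Systolic", "Diastolic",
           "Insulin", "HDL_C", "LDL_C", "Weight"]

# Synthetic admit data standing in for the real (de-identified) patient records
X, _ = make_blobs(n_samples=500, centers=4, n_features=len(columns), random_state=42)
patients = pd.DataFrame(X, columns=columns)

# Scale the attributes so no single test dominates the distance metric
scaled = StandardScaler().fit_transform(patients)

# k = 4 as suggested by the elbow curve above
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
patients["Cluster"] = kmeans.fit_predict(scaled)

# Per-cluster attribute profile, a simple summary of each patient segment
print(patients.groupby("Cluster").mean().round(2))
```

The per-cluster means printed at the end act as a simple profile that clinicians can review when interpreting or naming each patient segment.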

Below is the output clustering created by the above algorithm.

Just from this sample, healthcare providers can infer patient behavior and patterns based on their creatinine and glucose levels; in real-life situations, other attributes can be used.

AI will play a major role in future healthcare data management and decision-making, and data mining algorithms like K-Means provide an option to segment patients based on such attributes, which will improve the quality of patient care.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using a combination of multi-modal databases, Blockchain, and Data Mining. The solutions aim at patient data sharing within hospitals as well as across hospitals (Healthcare Interoperability), while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero knowledge proofs.

Customer Focus Realignment in a Pandemic Economy

Ashish Joseph

Business Environment Overview

The Pandemic Economy has created an environment that has tested businesses to either adapt or perish. The atmosphere has become a quest for the survival of the fittest. On the brighter side, organizations have stepped up and adapted to the crisis in a way that they have worked faster and better than ever before. 

During this crisis, companies have been strategic in understanding their focus areas and where to concentrate the most. From a high-level perspective, we can see that businesses have focused on recovering the sources of their revenues, rebuilding operations, restructuring the organization, and accelerating their digital transformation initiatives. In a way, the pandemic has forced companies to optimize their strategies and harness their core competencies in a hyper-competitive, survival-driven environment.

Need for Customer Focused Strategies

A pivotal and integral strategy to maintain and sustain growth is for businesses to avoid the churn of their existing customers and ensure that the quality of delivery builds their trust for future collaborations and referrals. Many organizations, including GAVS, have understood that Customer Experience and Customer Success are consequential for customer retention and brand affinity.

Businesses should realign themselves in the way they look at sales funnels. A large portion of the annual budget is usually allocated towards the top of the funnel activities to acquire more customers. But companies with customer success engraved in their souls, believe in the ideology that the bottom of the funnel feeds the top of the funnel. This strategy results in a self-sustaining and recurring revenue model for the business.

An independent survey conducted by the Customer Service Managers and Professionals Journal found that companies pay six times more to acquire new customers than to keep existing ones. In this pandemic economy, the cost of customer acquisition will be much higher than before, as organizations must be very judicious in their spending. The best step forward is to make sure companies strive for excellence in their customer experience and deliver measurable value to them. A study conducted by Bain and Company titled “Prescription for Cutting Costs” talks about how increasing customer retention by 5% increases profits by 25% to 95%.

The path to a sustainable, high-growth business is to adopt customer-centric strategies that yield more value and growth for its customers. Enhancing customer experience should be a priority, and proper governance must be in place to monitor and gauge strategies. Governance in the world of customer experience must revolve around identifying and managing the resources needed to drive sustained actions, establishing robust procedures to organize processes, and ensuring a framework for stellar delivery.

Scaling to ever-changing customer needs

A research body called Walker Information conducted independent research on B2B companies, focusing on key initiatives that drive customer experience and future growth. The study included various customer experience leaders, senior executives, and influencers representing a diverse set of business models in the industry. They published a report titled “Customer 2020: A Progress Report”; the following are the strategies that best meet the changing needs of customers in the B2B landscape.


Over 45% of the leaders highlighted the importance of developing a customer-centric culture that simplifies products and processes for the business. Now the question that we need to ask ourselves is, how do we as an organization scale up to these demands of the market? I strongly believe that each of us, in the different roles we play in the organization, has an impact.

The Executive Team can support more customer experience strategies, formulate success metrics, measure the impact of customer success initiatives, and ensure alignment with respect to the corporate strategy.

The Client Partners can ensure that they represent the voice of the customer, plot a feasible customer experience roadmap, be on point with customer intelligence data, and ensure transparency and communication with the teams and the customers. 

The cross-functional team managers and members can own and execute process improvements, personalize and customize customer journeys, and monitor key delivery metrics.

When all these members work in unison, the target goal of delivery excellence coupled with customer success is always achievable.

Going Above and Beyond

Organizations should aim for customers who can be retained for life. Retention depends on how far a business is willing to go the extra mile to add measurable value to its customers. Business contracts should evolve into partnerships that combine their competitive advantages to bring solutions to real-world business problems.

As customer success champions, we should reevaluate the possibilities in which we can make a difference for our customers. By focusing on our core competencies and using the latest tools in the market, we can look for avenues that can bring effort savings, productivity enhancements, process improvements, workflow optimizations, and business transformations that change the way our customers do business. 

After all, we are GAVS. We aim to galvanize a sense of measurable success through our committed teams and innovative solutions. We should always stride towards delivery excellence and strive for customer success in everything we do.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Patient 360 & Journey Mapping using Graph Technology

Srinivasan Sundararajan

360 Degree View of Patient

With rising demands for quality and cost-effective patient care, healthcare providers are focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. In other words, data-driven healthcare is augmenting human intelligence.

360 Degree View of Patient, as it is called, plays a major role in delivering the required information to the providers. It is a unified view of all the available information about a patient. It could include but is not limited to the following information:

  • Appointments made by the patients
  • Interaction with different doctors
  • Medications prescribed by the doctors
  • Patient’s relationships with other patients within the ecosystem, especially to identify family-history-related risks
  • Patient’s admission to hospitals or other healthcare facilities
  • Discharge and ongoing care
  • Patient personal wellness activities
  • Patient billing and insurance information
  • Linkages to the same patient in multiple disparate databases within the same hospital
  • Information about a patient’s involvement in various seminars, medical-related conferences, and other events

Limitations of Current Methods

As is evident in most hospitals, this information is usually scattered across multiple data sources/databases. Hospitals typically create a data warehouse by consolidating information from multiple sources to build a unified database. However, this approach relies on relational databases, which join tables across entities to arrive at a complete picture. An RDBMS is not meant to handle relationships that extend across multiple hops and require drilling down many levels.

Role of Graph Technology & Graph Databases

A graph database is a collection of nodes (or entities typically) and edges (or relationships). A node represents an entity (for example, a person or an organization) and an edge represents a relationship between the two nodes that it connects (for example, friends). Both nodes and edges may have properties associated with them.

While there are multiple graph databases in the market today, like Neo4J, JanusGraph, and TigerGraph, the following technical discussion pertains to the graph database capability that is part of SQL Server 2019. The main advantage of this approach is that it utilizes the best RDBMS features wherever applicable, while keeping the graph database options for complex relationships like the 360-degree view of patients, making it a true polyglot persistence architecture.

As mentioned above, in SQL Server 2019 a graph database is a collection of node tables and edge tables. A node table represents an entity in a graph schema. An edge table represents a relationship in a graph. Edges are always directed and connect two nodes. An edge table enables users to model many-to-many relationships in the graph. Normal SQL Insert statements are used to create records into both node and edge tables.

While the node tables and edge tables represent the storage of graph data, there are specialized commands that act as extensions of SQL and help traverse the nodes to get full details such as the patient 360-degree data.

MATCH statement

The MATCH statement links two node tables through an edge table, so that complex relationships can be retrieved. An example is sketched below.
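The sketch below is a hedged illustration, not the Rhodium implementation: it creates hypothetical Patient and Doctor node tables and a ConsultedBy edge table in SQL Server 2019, and runs a MATCH query from Python via pyodbc. The connection string, table names, and columns are assumptions for demonstration only.

```python
import pyodbc

# Hypothetical connection; adjust driver/server/database for your environment
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=PatientDB;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Node tables hold the entities; the edge table holds the relationship
cur.execute("CREATE TABLE Patient (PatientId INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;")
cur.execute("CREATE TABLE Doctor (DoctorId INT PRIMARY KEY, Name NVARCHAR(100)) AS NODE;")
cur.execute("CREATE TABLE ConsultedBy (VisitDate DATE) AS EDGE;")

# Normal INSERT statements populate both node and edge tables
cur.execute("INSERT INTO Patient VALUES (1, 'John Doe');")
cur.execute("INSERT INTO Doctor VALUES (10, 'Dr. Smith');")
cur.execute(
    "INSERT INTO ConsultedBy ($from_id, $to_id, VisitDate) "
    "SELECT p.$node_id, d.$node_id, '2021-01-05' "
    "FROM Patient AS p, Doctor AS d "
    "WHERE p.PatientId = 1 AND d.DoctorId = 10;"
)

# MATCH traverses the graph: which doctors has patient 1 consulted?
cur.execute(
    "SELECT d.Name FROM Patient AS p, ConsultedBy AS c, Doctor AS d "
    "WHERE MATCH(p-(c)->d) AND p.PatientId = 1;"
)
print([row.Name for row in cur.fetchall()])
conn.commit()
```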


SHORTEST_PATH statement

It finds the relationship path between two node tables by performing multiple hops recursively. It is one of the most useful statements for building the 360-degree view of a patient.

There are more options and statements as part of graph processing. Together, they help identify complex relationships across business entities and retrieve them.

Graph Processing in Rhodium

As mentioned in my earlier articles (Healthcare Data Sharing & Zero Knowledge Proofs in Healthcare Data Sharing), the GAVS Rhodium framework enables Patient Data Management and Patient Data Sharing, and graph databases play a major part in providing Patient 360 as well as provider (doctor) credentialing data. The screenshots below show samples from the reference implementation.


Patient Journey Mapping

Typically, a patient’s interaction with the healthcare service provider goes through a cycle of events. The goal of the provider organization is to make this journey smooth and provide the best care to the patients. It should be noted that not all patients go through this journey in a sequential manner, some may start the journey at a particular point and may skip some intermediate journey points. Proper data collection of events behind patient journey mapping will also help with the future prediction of events which will ultimately help with patient care.

Patient 360 data collection plays a major role in building the patient journey mapping. While there could be multiple definitions, the following is one of the examples of mapping between patient 360-degree events and patient journey mapping.


The diagram below shows an example of patient journey mapping information.


Understanding patients better is essential for improving patient outcomes. The 360-degree view of patients and patient journey mapping are key components for providing such insights. While traditional technologies lack the ability to provide those links, graph databases and graph processing will play a major role in patient data management.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Quantum Computing

Vignesh Ramamurthy


In the MARVEL multiverse, Ant-Man has one of the coolest superpowers out there. He can shrink himself down as well as blow himself up to any size he desires! He was able to reduce to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, there is Quantum computing. Quantum computers promise to be far more powerful than today’s supercomputers on certain problems, and tech companies like Google, IBM, and Rigetti have built them.

Google claimed to have achieved Quantum Supremacy with its quantum computer ‘Sycamore’ in 2019. It is claimed to have performed, in 200 seconds, a calculation that might take the world’s most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, with temperatures close to absolute zero.


Quantum Physics

Quantum computing falls under a discipline called quantum physics. Its heart and soul reside in what we call qubits (quantum bits) and superposition. So, what are they?

Let’s take a simple example, imagine you have a coin and you spin it. One cannot know the outcome unless it falls flat on a surface. It can either be a head or a tail. However, while the coin is spinning you can say the coin’s state is both heads and tails at the same time (qubit). This state is called Superposition.

So, how do they work and what does it mean?

We know bits are a combination of 0s and 1s (negative or positive states). Qubits can hold both at the same time. These qubits, in the end, pass through something called the “Grover Operator”, which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, just like how Doctor Strange did in the movie Infinity War. However, what is important is to understand how this technically works.

We shall look at two explanations which I feel give an accurate picture of the technical aspects.

The following is as explained by Scott Aaronson, a quantum scientist at the University of Texas at Austin.

Amplitude – each possible state (being 0 or being 1) has an amplitude, and amplitudes can be positive or negative. The goal is to choreograph the computation so that amplitudes leading to wrong answers cancel each other out; this way, the amplitude of the right answer remains the only likely outcome.

Quantum computers function using a process called superconductivity. We have a chip the size of an ordinary computer chip. There are little coils of wire in the chip, nearly big enough to see with the naked eye. There are two different quantum states of current flowing through these coils, corresponding to 0 and 1, or superpositions of them.

These coils interact with each other; nearby ones talk to each other and generate a state called an entangled state, which is an essential state in quantum computing. The way qubits interact is completely programmable, so we can send electrical signals to these qubits and tweak them according to our requirements. This whole chip is placed in a refrigerator at a temperature close to absolute zero. This way superconductivity occurs, which makes the coils briefly behave as qubits.

The following is the explanation given by ‘Kurzgesagt – In a Nutshell’, a YouTube channel.

We know a bit is either a 0 or a 1. Now, 4 bits can form 0000 and so on; 4 classical bits can be in one of 2^4 = 16 different configurations at a time, and we can use just one of them. 4 qubits in superposition, however, can be in all of those 16 combinations at once.

This grows exponentially with each extra qubit. 20 qubits can hence store a million values in parallel. As seen, these entangled states interact with each other instantly. Hence while measuring one entangled qubit, we can directly deduce the property of its partners.

A normal logic gate gets a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them, and manipulates probabilities. Now it finally measures the outcome, collapsing superpositions to an actual sequence of 0s and 1s. This is how we get the entire set of calculations performed at the same time.
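As a toy illustration of these ideas, the NumPy sketch below simulates a single qubit’s amplitude vector, applies a Hadamard gate to create a superposition, and then “measures” it. This is a classical simulation for intuition only, not how a real quantum device is programmed.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                          # the classical state |0> as amplitudes
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a basic quantum gate

qubit = hadamard @ ket0                              # equal superposition of 0 and 1
probabilities = np.abs(qubit) ** 2                   # squared amplitudes give outcome odds

outcome = np.random.choice([0, 1], p=probabilities)  # measurement collapses the state
print("amplitudes:", qubit, "probabilities:", probabilities, "measured:", outcome)
```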

What is a Grover Operator?

We now know that by measuring one entangled qubit, it is possible to deduce properties of all its partners. The Grover algorithm works because these quantum particles are entangled. Since one entangled qubit is able to vouch for its partners, the algorithm iterates until it finds the solution with a high degree of confidence.

What can they do?

As of now, quantum computing hasn’t been implemented in real-life situations because the world doesn’t yet have the infrastructure for it.

Assuming they are efficient and ready to be used, we can make use of them in the following ways: 1) Self-driving cars are picking up pace. Quantum computers could assist these cars by calculating all possible outcomes on the road. Apart from sensors to reduce accidents, roads consist of traffic signals. A quantum computer would be able to go through all the possibilities of how traffic signals function, the time intervals, the traffic, everything, and feed these self-driving cars with the single best outcome accordingly. What would result is nothing but a seamless commute with no hassles whatsoever. It’ll be the future as we see in movies.

2) If AI is able to construct a circuit board after having tried everything in the design architecture, this could result in promising AI-related applications.

Disadvantages

RSA encryption underpins much of the internet. A quantum computer could breach it, and hackers might steal confidential information related to health, defence, personal information, and other sensitive data. At the same time, it could help achieve the most secure encryption, by identifying the best among every possible encryption scheme. This could mean building the most secure wall against all the viruses that could infect the internet. If such security were built, it would take a completely new kind of attack to break it. But the chances are very minuscule.

Quantum computing has its share of benefits. However, it will take years to be put to use. The infrastructure and the amount of investment required are humongous. After all, it can only be used when there are very reliable real-time use cases, and it needs to be tested for many things. There is no doubt that quantum computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be prepared for it.


About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I will be elaborating on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How does the Blockchain-powered GAVS Rhodium Platform help address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
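A classic textbook example is the Schnorr identification protocol, sketched below as a toy Python illustration (not a healthcare-grade or Rhodium implementation): the prover convinces the verifier that it knows a secret x without ever revealing x. The tiny numbers are illustrative assumptions; real systems use large primes and vetted cryptographic libraries.

```python
import random

# Tiny illustrative group: g generates a subgroup of prime order q in Z_p*
p, q, g = 11, 5, 3
x = 4                      # prover's secret (the "data behind the proof")
y = pow(g, x, p)           # public value known to the verifier

# Prover commits to a random nonce without revealing x
r = random.randrange(q)
t = pow(g, r, p)

# Verifier issues a random challenge
c = random.randrange(q)

# Prover's response reveals nothing about x on its own
s = (r + c * x) % q

# Verifier checks the proof using only public values: g^s == t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof verified: the prover knows x, but x was never revealed.")
```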

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is done based on a lack of trust.

  • Insurance companies are always wary of fraudulent claims (which is anyhow a major issue), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the Patient is indeed advised to take the medicines and give the same to the patients.
  • Patients most times also want to make sure that the diagnosis and treatment given to them are indeed proper and no wrong diagnosis is done.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, either of the parties, i.e. patient, hospital, pharmacy, insurance companies, can take on the role of a verifier, and typically patients and sometimes hospitals are the provers.

While ZKP can be applied to any of the transactions involving the above parties, current research in the industry is mostly focused on patient privacy rights, and ZKP initiatives concentrate on how much, or how little, information a patient (prover) must share with a verifier before getting the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I am not getting into the fundamentals of Blockchain, readers should understand that one of the fundamental backbones of Blockchain is trust within the context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, yet allowing participation in a transaction.

Some of the characteristics of the Blockchain transaction that makes it conducive for Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • A smart contract instance (i.e. the particular invocation of that smart contract) has an owner, i.e. the public key of the account holder who creates it; for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as it knows the public key of the initiator.
  • Some of the important aspects of an approval life cycle like validation, approval, rejection, can be delegated to other stakeholders by delegating that task to the respective public key of that stakeholder.
  • For example, if a doctor needs to approve a medical condition of a patient, the same can be delegated to the doctor and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented in any Blockchain platform including totally uncontrolled public blockchain platforms, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known, and each participant trusts the other, but the due diligence that is needed with the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain Involving Multiple Other Organizations, whose access rights differ. Each member controls their own access to Blockchain Network with Cryptographic Keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept in building trust-based networks. While a basic Blockchain platform can help bring in the concept in a trust-based manner, a lot of research is being done to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) utilizes the concept of a zero-knowledge proof. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article about Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple stages, and at the advanced maturity levels Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey


The healthcare industry today is affected by fraud and a lack of trust on one side, and by growing privacy concerns of patients on the other. In this context, the introduction of Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Artificial Intelligence in Healthcare

Dr. Ramjan Shaik

Scientific progress is about many small advancements and occasional big leaps. Medicine is no exception. In a time of rapid healthcare transformation, health organizations must quickly adapt to evolving technologies, regulations, and consumer demands. Since the inception of electronic health record (EHR) systems, volumes of patient data have been collected, creating an atmosphere suitable for translating data into actionable intelligence. The growing field of artificial intelligence (AI) has created new technology that can handle large data sets, solving complex problems that previously required human intelligence. AI integrates these data sources to develop new insights on individual health and public health.

Highly valuable information can sometimes get lost amongst trillions of data points, costing the industry around $100 billion a year. Providers must ensure that patient privacy is protected, and consider ways to find a balance between costs and potential benefits. The continued emphasis on cost, quality, and care outcomes will perpetuate the advancement of AI technology to realize additional adoption and value across healthcare. Although most organizations utilize structured data for analysis, valuable patient information is often “trapped” in an unstructured format. This type of data includes physician and patient notes, e-mails, and audio voice dictations. Unstructured data is frequently richer and more multifaceted. It may be more difficult to navigate, but unstructured data can lead to a plethora of new insights. Using AI to convert unstructured data to structured data enables healthcare providers to leverage automation and technology to enhance processes, reduce the staff required to monitor patients while filling gaps in healthcare labor shortages, lower operational costs, improve patient care, and monitor the AI system for challenges.

AI is playing a significant role in medical imaging and clinical practice. Providers and healthcare organizations have recognized the importance of AI and are tapping into intelligence tools. Growth in the AI health market is expected to reach $6.6 billion by 2021 and to exceed $10 billion by 2024.  AI offers the industry incredible potential to learn from past encounters and make better decisions in the future. Algorithms could standardize tests, prescriptions, and even procedures across the healthcare system, being kept up-to-date with the latest guidelines in the same way a phone’s operating system updates itself from time to time.

There are three main areas where AI efforts are being invested in the healthcare sector.

  • Engagement – This involves improving how patients interact with healthcare providers and systems.
  • Digitization – AI and other digital tools are expected to make operations more seamless and cost-effective.
  • Diagnostics – By using products and services that employ AI algorithms, diagnosis and patient care can be improved.

AI will be most beneficial in three other areas namely physician’s clinical judgment and diagnosis, AI-assisted robotic surgery, and virtual nursing assistants.

Following are some of the scenarios where AI makes a significant impact in healthcare:

  • AI can be utilized to provide personalized and interactive healthcare, including anytime face-to-face appointments with doctors. AI-powered chatbots can be powered with technology to review the patient symptoms and recommend whether a virtual consultation or a face-to-face visit with a healthcare professional is necessary.
  • AI can enhance the efficiency of hospitals and clinics in managing patient data, clinical history, and payment information by using predictive analytics. Hospitals are using AI to gather information on trillions of administrative and health record data points to streamline the patient experience. This collaboration of AI and data helps hospitals/clinics to personalize healthcare plans on an individual basis.
  • A taskforce augmented with artificial intelligence can quickly prioritize hospital activity for the benefit of all patients. Such projects can improve hospital admission and discharge procedures, bringing about enhanced patient experience.
  • Companies can use algorithms to scrutinize huge clinical and molecular data to personalize healthcare treatments by developing AI tools that collect and analyze data from genetic sequencing to image recognition empowering physicians in improved patient care. AI-powered image analysis helps in connecting data points that support cancer discovery and treatment.
  • Big data and artificial intelligence can be used in combination to predict clinical, financial, and operational risks by taking data from all the existing sources. AI analyzes data throughout a healthcare system to mine, automate, and predict processes. It can be used to predict ICU transfers, improve clinical workflows, and even pinpoint a patient’s risk of hospital-acquired infections. Using artificial intelligence to mine health data, hospitals can predict and detect sepsis, which ultimately reduces death rates.
  • AI helps healthcare professionals harness their data to optimize hospital efficiency, better engage with patients, and improve treatment. AI can notify doctors when a patient’s health deteriorates and can even help in the diagnosis of ailments by combing its massive dataset for comparable symptoms. By collecting symptoms of a patient and inputting them into the AI platform, doctors can diagnose quickly and more effectively.   
  • Robot-assisted surgeries, ranging from minimally-invasive procedures to open-heart surgeries, enable doctors to perform procedures with precision, flexibility, and control that go beyond human capabilities, leading to fewer surgery-related complications, less pain, and a quicker recovery time. Robots can be developed to improve endoscopies by employing the latest AI techniques, which help doctors get a clearer view of a patient’s illness from both a physical and data perspective.

Having understood the advancements of AI in various facets of healthcare, it is to be realized that AI is not yet ready to fully interpret a patient’s nuanced response to a question, nor is it ready to replace examining patients – but it is efficient in making differential diagnoses from clinical results. It is to be understood very clearly that the role of AI in healthcare is to supplement and enhance human judgment, not to replace physicians and staff.

We at GAVS Technologies are fully equipped with cutting edge AI technology, skills, facilities, and manpower to make a difference in healthcare.

Following are the ongoing and in-pipeline projects that we are working on in healthcare:

ONGOING PROJECT:


PROJECTS IN PIPELINE:


Following are the projects that are being planned:

  • Controlling Alcohol Abuse
  • Management of Opioid Addiction
  • Pharmacy Support – drug monitoring and interactions
  • Reducing medication errors in hospitals
  • Patient Risk Scorecard
  • Patient Wellness – Chronic Disease management and monitoring

In conclusion, it is evident that the advent of AI in the healthcare domain has shown a tremendous impact on patient treatment and care. For more information on how our AI-led solutions and services can help your healthcare enterprise, please reach out to us here.

About the Author –

Dr. Ramjan is a Data Analyst at GAVS. He has a Doctorate degree in the field of Pharmacy. He is passionate about drawing insights out of raw data and considers himself to be a ‘Data Person’.

He loves what he does and tries to make the most of his work. He is always learning something new from programming, data analytics, data visualization to ML, AI, and more.

Center of Excellence – Big Data

The Big Data CoE is a team of experts that experiments and builds various cutting-edge solutions by leveraging the latest technologies, like Hadoop, Spark, TensorFlow, and emerging open-source technologies, to deliver robust business results. A CoE is where organizations identify new technologies, learn new skills, and develop appropriate processes that are then deployed into the business to accelerate adoption.

Leveraging data to drive competitive advantage has shifted from being an option to being a requirement in a hypercompetitive business landscape. One of the main objectives of the CoE is deciding on the right strategy for the organization to become data-driven and benefit from a world of Big Data, Analytics, Machine Learning, and the Internet of Things (IoT).

Triple Constraints of Projects

“According to the Chaos Report, 52% of projects are either delivered late or run over the allocated budget. The average across all companies is 189% of the original cost estimate. The average cost overrun is 178% for large companies, 182% for medium companies, and 214% for small companies. The average overrun is 222% of the original time estimate. For large companies, the average is 230%; for medium companies, the average is 202%; and for small companies, the average is 239%.”

The Big Data CoE plays a vital role in bringing down costs and reducing response times to ensure projects are delivered on time, by helping the organization build skilled resources.

Big Data’s Role

The CoE helps the organization build quality big data applications on their own by maximizing their ability to leverage data. Our data engineers are committed to helping organizations:

  • define your strategic data assets and data audience
  • gather the required data and put in place new collection methods
  • get the most from predictive analytics and machine learning
  • have the right technology, data infrastructure, and key data competencies
  • ensure you have an effective security and governance system in place to avoid huge financial, legal, and reputational problems.

Data Analytics Stages

Architecture optimized building blocks covering all data analytics stages: data acquisition from a data source, preprocessing, transformation, data mining, modeling, validation, and decision making.


Focus areas

Algorithms support the following computation modes:

  • Batch processing
  • Online processing
  • Distributed processing
  • Stream processing

The Big Data analytics lifecycle can be divided into the following nine stages:

  • Business Case Evaluation
  • Data Identification
  • Data Acquisition & Filtering
  • Data Extraction
  • Data Validation & Cleansing
  • Data Aggregation & Representation
  • Data Analysis
  • Data Visualization
  • Utilization of Analysis Results

A key focus of the Big Data CoE is to establish a data-driven organization by developing proofs of concept with the latest Big Data and Machine Learning technologies. As part of CoE initiatives, we are involved in developing AI widgets for various marketplaces, such as Azure, AWS, Magento, and others. We are also actively involved in engaging and motivating the team to learn cutting-edge technologies and tools like Apache Spark and Scala. We encourage the team to approach each problem in a pragmatic way by helping them understand the latest architectural patterns over the traditional MVC methods.

It has been established that business-critical decisions supported by data-driven insights have been more successful. We aim to take our organization forward by unleashing the true potential of data!

If you have any questions about the CoE, you may reach out to them at SME_BIGDATA@gavstech.com

CoE Team Members

  • Abdul Fayaz
  • Adithyan CR
  • Aditya Narayan Patra
  • Ajay Viswanath V
  • Balakrishnan M
  • Bargunan Somasundaram
  • Bavya V
  • Bipin V
  • Champa N
  • Dharmeswaran P
  • Diamond Das
  • Inthazamuddin K
  • Kadhambari Manoharan
  • Kalpana Ashokan
  • Karthikeyan K
  • Mahaboobhee Mohamedfarook
  • Manju Vellaichamy
  • Manojkumar Rajendran
  • Masthan Rao Yenikapati
  • Nagarajan A
  • Neelagandan K
  • Nithil Raj Tharammal Paramb
  • Radhika M
  • Ramesh Jayachandar
  • Ramesh Natarajan
  • Ruban Salamon
  • Senthil Amarnath
  • T Mohammed Anas Aadil
  • Thulasi Ram G
  • Vijay Anand Shanmughadass
  • Vimalraj Subash

Center of Excellence – .Net


“Maximizing the quality, efficiency, and reusability by providing innovative technical solutions, creating intellectual capital, inculcating best practices and processes to instill greater trust and provide incremental value to the Stakeholders.”

With the above mission, we have embarked on our journey to establish and strengthen the .NET Center of Excellence (CoE).

“The only way to do great work is to love what you do.” – Steve Jobs

Expertise in this CoE is drawn from top talent across all customer engagements within GAVS. Team engagement is maintained at a very high level through various connects such as regular technology sessions, advanced trainings for CoE members from MS, and support and guidance for becoming an MS MVP. Members also socialize new trending articles, tools, whitepapers, and blogs within the CoE team and the MS Teams channels set up for collaboration. All communications from MS Premier Communications sent to Gold Partners are also shared within the group. The high-level roadmap planned for this group is laid out below.


The .NET CoE is focused on assisting our customers in every stage of the engagement, right from on-boarding, planning, execution, and technical implementation, all the way to launching and growing. Our prescriptive approach is to leverage industry-proven best practices, solutions, and reusable components, and to include robust resources, training, and a vibrant partner community.

With the above as the primary goal in mind, the CoE group is currently engaged in or planning the following initiatives.

Technology Maturity Assessment

One of the main objectives of this group is to provide constant feedback to all .NET stack projects for continuous improvement. The goal for this initiative is to build a technology maturity index for all projects on the parameters below.


Using these approaches, within a short span of time we were able to make a significant impact on some of our engagements.

Client – Online Chain Store: Identified cheaper cloud hosting option for application UI.

Benefits: Huge cost and time savings.

Client – Health care sector: Provided alternate solution for DB migrations from DEV to various environments.

Benefits: Huge annual cost savings on licensing.

Competency Building

“Anyone who stops learning is old, whether at twenty or eighty.” – Henry Ford

Continuous learning and upskilling are the new norms in today’s fast-changing technology landscape. This initiative is focused on providing learning and upskilling support to all technology teams in GAVS. Identifying code mentors and supporting team members to become full-stack developers are some of the activities planned under this initiative. Working along with the Learning & Development team, the .NET CoE is formulating different training tracks to upskill team members and provide support for external assessments and MS certifications.

Solution Accelerators

“Good, better, best. Never let it rest. ‘Till your good is better and your better is best.” – St. Jerome

The primary determinants of CoE effectiveness are involvement in solutions and accelerators, and maintaining standard practices of the relevant technologies across customer engagements throughout the organization.

As part of this initiative we are focusing on building project templates, DevOps pipelines and automated testing templates for different technology stacks for both Serverless and Server Hosted scenarios. We also are planning similar activities for the Desktop/Mobile Stack with the Multi-Platform App UI (MAUI) framework which is planned to be released for Preview in Q4 2020.


Additionally, we are also adopting low-code and no-code development platforms for accelerated development cycles for specific use cases.

As we progress on our journey to strengthen the .NET CoE, we want to act as a catalyst in the rapid and early adoption of new technology solutions and work as trusted partners with all our customers and stakeholders.

If you have any questions about the CoE, you may reach out to them at COE_DOTNET@gavstech.com

CoE Team Members

  • Bismillakhan Mohammed
  • Gokul Bose
  • Kirubakaran Girijanandan
  • Neeraj Kumar
  • Prasad D
  • Ramakrishnan S
  • Saphal Malol
  • Saravanan Swaminathan
  • Senthilkumar Kamayaswami
  • Sethuraman Varadhan
  • Srinivasan Radhakrishnan
  • Thaufeeq Ahmed
  • Thomas T
  • Vijay Mahalingam

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots was not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there which are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source. They cannot be hosted on our own servers or run on-premise, and they are mostly generalized and not very specific for a reason.

DialogFlow vs.  RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. Mostly complete here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities/intents/etc. either via the API or with their web based interface.
  • Data is hosted in the cloud and any interaction with API.ai requires cloud-related communication.
  • It cannot be operated on premise.

Rasa NLU + Core

  • To compete with the best frameworks like Google DialogFlow and Microsoft Luis, RASA came up with two built-in features: NLU and CORE.
  • RASA NLU handles the intent and entity. Whereas, the RASA CORE takes care of the dialogue flow and guesses the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface, the users are free to customize and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open-source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives control over the NLU, which we can customize according to a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow, Keras.

Also, Rasa Stack is a platform that has seen some fast growth within 2 years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted like place or time. From the previous example, by intent, we understand the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, it’s an operation which can be performed by the bot. It could be replying with something (text, image, video, suggestion, etc.), querying a database, or any other possibility in code (a sample custom action is sketched after this list).
  • Stories: These are sample interactions between the user and bot, defined in terms of intents captured and actions performed. So, the developer can mention what to do if you get a user input of some intent with/without some entities. Like saying if user intent is to find the day of the week and entity is today, find the day of the week of today and reply.
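To make the Actions idea concrete, here is a minimal custom action sketch using the Rasa SDK (rasa_sdk), answering the “Which day is today?” example above; the action name and reply text are illustrative assumptions, not part of any specific RASA project.

```python
from typing import Any, Dict, List, Text
import datetime

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionTellDay(Action):
    """Custom action for an intent like 'Which day is today?'."""

    def name(self) -> Text:
        return "action_tell_day"          # referenced from the domain and stories

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        day = datetime.date.today().strftime("%A")
        dispatcher.utter_message(text=f"Today is {day}.")
        return []                          # no extra events for the tracker
```

In a Rasa project, a class like this would typically live in actions.py and be listed in the domain and stories files so that RASA Core can choose it as the next action.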

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides the function of intent classification and entity extraction. This helps the chatbot to understand what the user is saying. Refer to the below diagram of how NLU processes user input.

  • RASA CORE: it uses machine learning techniques to generalize the dialogue flow of the system. It also predicts next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:


The steps are as follows:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.

Areas of application

RASA is a one-stop solution in various industries like:

  • Customer Service: broadly used for technical support, accounts and billings, conversational search, travel concierge.
  • Financial Service: used in many banks for account management, bills, financial advices and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurances and others

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.