Quantum Computing

Vignesh Ramamurthy

In the Marvel multiverse, Ant-Man has one of the coolest superpowers out there: he can shrink down or grow to any size he desires! He was even able to shrink to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, in the real world we have quantum computing. Quantum computers promise computational power far beyond today's supercomputers for certain problems, and tech companies like Google, IBM, and Rigetti have already built them.

Google achieved quantum supremacy with its quantum computer ‘Sycamore’ in 2019. It claimed to perform in 200 seconds a calculation that would take the world’s most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, with the temperature close to absolute zero.

Quantum Physics

Quantum computing is rooted in the discipline of quantum physics. Its heart and soul reside in what we call qubits (quantum bits) and superposition. So, what are they?

Let’s take a simple example: imagine you have a coin and you spin it. You cannot know the outcome until it falls flat on a surface; it can land either heads or tails. However, while the coin is spinning, you could say its state is both heads and tails at the same time, much like a qubit. This state is called superposition.

So, how do they work and what does it mean?

We know a classical bit is either a 0 or a 1 (two distinct states). A qubit can hold both at the same time. In search algorithms such as Grover’s, these qubits finally pass through something called the “Grover operator”, which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, just like how Doctor Strange narrowed down the one winning outcome in Infinity War. However, what is important is to understand how this works technically.

We shall look at two explanations which I feel give an accurate picture of the technical side of it.

The first is the explanation given by Scott Aaronson, a quantum computing researcher at the University of Texas at Austin.

Amplitude – every qubit has an amplitude for being 0 and an amplitude for being 1, and unlike probabilities, amplitudes can be positive or negative. The goal of a quantum algorithm is to choreograph these amplitudes so that the paths leading to wrong answers cancel each other out, while the paths leading to the right answer reinforce each other. This way, the right answer remains the only likely outcome when we measure.
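To make this cancellation concrete, here is a minimal sketch (assuming Python with NumPy, neither of which is mentioned in the article) that applies the Hadamard gate twice to a qubit starting in state |0⟩. After the first gate the qubit is in an equal superposition; after the second, the amplitudes leading to |1⟩ interfere destructively and cancel, leaving only |0⟩.

```python
import numpy as np

# Hadamard gate: puts a basis state into an equal superposition.
H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])      # |0>, written as amplitudes for (|0>, |1>)

state = H @ state                 # equal superposition: [0.707, 0.707]
print("after one H :", state)

state = H @ state                 # the |1> amplitudes cancel: back to [1, 0]
print("after two H :", state)

print("probabilities:", np.round(state ** 2, 3))   # squared amplitudes
```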

Quantum computers function using a phenomenon called superconductivity. We have a chip the size of an ordinary computer chip, with little coils of wire in it that are nearly big enough to see with the naked eye. There are two different quantum states of current flowing through these coils, corresponding to 0 and 1, or superpositions of the two.

These coils interact with each other; nearby ones talk to each other and generate what is called an entangled state, which is an essential state in quantum computing. The way the qubits interact is completely programmable, so we can send electrical signals to these qubits and tweak them according to our requirements. The whole chip is placed in a refrigerator at a temperature close to absolute zero. This way superconductivity occurs, which makes the coils briefly behave as qubits.

The second explanation is from ‘Kurzgesagt – In a Nutshell’, a YouTube channel.

We know a bit is either a 0 or a 1. Four classical bits can be in one of 2^4 = 16 different configurations, but only in one of them at a time. Four qubits in superposition, however, can be in all 16 of those combinations at once.

This grows exponentially with each extra qubit; 20 qubits can hence hold about a million values in parallel. Entangled qubits are correlated with one another, so by measuring one entangled qubit we can directly deduce properties of its partners.
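As a rough illustration of that exponential growth, the sketch below (again assuming Python with NumPy) builds the state vector for n qubits in an equal superposition and prints how many amplitudes it has to keep track of.

```python
import numpy as np

def equal_superposition(n_qubits: int) -> np.ndarray:
    """Return the state vector of n qubits, each in an equal superposition."""
    dim = 2 ** n_qubits                      # one amplitude per basis state
    return np.full(dim, 1 / np.sqrt(dim))    # equal amplitude for every state

for n in (4, 10, 20):
    state = equal_superposition(n)
    print(f"{n:>2} qubits -> {state.size:,} amplitudes tracked in parallel")

# 4 qubits -> 16 amplitudes; 20 qubits -> 1,048,576 amplitudes (about a million)
```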

A normal logic gate gets a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them and manipulate probabilities, and finally measures the outcome, collapsing the superpositions to an actual sequence of 0s and 1s. This is how an entire set of calculations can be performed at the same time.

What is a Grover Operator?

We now know that by measuring one entangled qubit, it is possible to deduce properties of all its partners. Grover’s algorithm works because these quantum particles are entangled. Since one entangled qubit can vouch for its partners, the algorithm iterates, boosting the amplitude of the correct answer, until it finds the solution with a high degree of confidence.
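Below is a minimal sketch of Grover iterations over a 3-qubit search space (assuming Python with NumPy; the marked index is an arbitrary choice made for illustration). Each iteration flips the sign of the marked state (the oracle) and then reflects all amplitudes about their mean (the diffusion step), which is exactly the amplification described above.

```python
import numpy as np

N = 8              # search space of 3 qubits (2**3 basis states)
marked = 5         # index of the "right answer" (arbitrary, for illustration)

state = np.full(N, 1 / np.sqrt(N))                    # equal superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))    # ~2 iterations for N = 8
for _ in range(iterations):
    state[marked] *= -1                   # oracle: flip the sign of the answer
    state = 2 * state.mean() - state      # diffusion: reflect about the mean

probabilities = state ** 2
print("probability of each state:", np.round(probabilities, 3))
print("most likely state:", int(np.argmax(probabilities)))    # -> 5
```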

What can they do?

As of now, quantum computing hasn’t been applied to real-life situations, simply because the world does not yet have the infrastructure for it.

Assuming they are efficient and ready to be used, we could make use of them in the following ways:

1) Self-driving cars are picking up pace. Quantum computers could assist these cars by calculating all possible outcomes on the road. Apart from sensors to reduce accidents, roads are governed by traffic signals. A quantum computer would be able to go through all the possibilities of how the traffic signals function, the time intervals, the traffic, everything, and feed the self-driving cars with the single best outcome accordingly. The result would be a seamless commute with no hassles whatsoever, the kind of future we see in movies.

2) If AI were able to construct a circuit board after having tried every option in the design architecture, this could result in promising AI-related applications.

Disadvantages

RSA encryption underpins the security of much of the internet. A powerful quantum computer could breach it, and hackers might steal confidential information related to health, defence, personal records, and other sensitive data. At the same time, it could help achieve the most secure encryption, by identifying the best scheme amongst every possible one and building the most secure wall against all the viruses that could infect the internet. If such security were built, it would take a completely new virus to break it, and the chances of that are minuscule.

Quantum computing has its share of benefits. However, it will take years to be put to use. The infrastructure and the amount of investment required are humongous, and it can only be adopted once there are reliable real-time use cases; it still needs to be tested for many things. There is no doubt that quantum computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be prepared for it.

About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I elaborated on the challenges of Patient Master Data Management, Patient 360, and the associated Patient Data Sharing. I also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I will elaborate on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How does the Blockchain-powered GAVS Rhodium Platform help address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, a Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each such transaction has a ‘prover’ and a ‘verifier’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
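To make the prover/verifier roles concrete, here is a toy sketch in Python of the classic Schnorr identification protocol (my own illustration with deliberately tiny numbers, unrelated to the Rhodium framework and nowhere near production cryptography). The prover convinces the verifier that she knows a secret x with y = g^x mod p, without ever revealing x.

```python
import random

# Toy public parameters (far too small for real use).
p = 467          # prime modulus, p = 2q + 1
q = 233          # prime order of the subgroup
g = 4            # generator of the order-q subgroup

# Prover's secret and the corresponding public value.
x = 127                      # the secret the prover wants to prove knowledge of
y = pow(g, x, p)             # public: y = g^x mod p

# 1. Commitment: the prover picks a random nonce and sends t = g^r mod p.
r = random.randrange(q)
t = pow(g, r, p)

# 2. Challenge: the verifier replies with a random challenge c.
c = random.randrange(q)

# 3. Response: the prover sends s = r + c*x mod q (this alone leaks nothing about x).
s = (r + c * x) % q

# 4. Verification: the verifier checks g^s == t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Verifier is convinced the prover knows x, without ever learning x.")
```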

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is done based on a lack of trust.

  • Insurance companies are always wary of fraudulent claims (which is a major issue anyway), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the patient has indeed been prescribed the medicines before dispensing them.
  • Patients, for their part, want to make sure that the diagnosis and treatment given to them are proper and that no misdiagnosis has been done.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, any of the parties, i.e. patient, hospital, pharmacy, or insurance company, can take on the role of a verifier; typically patients, and sometimes hospitals, are the provers.

While ZKP can be applied to any of the transactions involving the above parties, current research in the industry is mostly focused on patient privacy rights. ZKP initiatives therefore target how much, or how little, information a patient (the prover) needs to share with a verifier before getting the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I am not getting into the fundamentals of Blockchain here, readers should understand that one of its fundamental backbones is trust within the context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, yet allow them to participate in a transaction.

Some of the characteristics of Blockchain transactions that make them conducive to Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • A smart contract instance (i.e. a particular invocation of that smart contract) has an owner, namely the public key of the account holder who creates it; for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as it knows the public key of the initiator.
  • Important aspects of an approval life cycle, like validation, approval, and rejection, can be delegated to other stakeholders by assigning the task to the respective public key of that stakeholder (a small code sketch of this ownership and delegation model follows this list).
  • For example, if a doctor needs to approve a medical condition of a patient, the task can be delegated to the doctor, and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone sees only the public key while other details stay hidden.
  • Some of the approval documents can be transferred off-chain (outside of the blockchain), such that participants of the blockchain see only the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption using the sender’s private/public keys can lead to more advanced use cases.
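The following is a minimal, hypothetical sketch in plain Python (no blockchain library; the class and field names are mine, not part of Rhodium) of that ownership and delegation idea: a record is owned by the public key that created it, and only a key the owner has delegated to may approve it.

```python
from dataclasses import dataclass, field

@dataclass
class MedicalRecordContract:
    """Toy stand-in for a smart-contract instance that owns a patient record."""
    owner_pubkey: str                     # the patient who created the record
    condition: str
    delegated_approvers: set = field(default_factory=set)
    status: str = "PENDING"

    def delegate_approval(self, caller_pubkey: str, doctor_pubkey: str) -> None:
        # Only the owner may delegate the approval task.
        if caller_pubkey != self.owner_pubkey:
            raise PermissionError("only the owner can delegate approval")
        self.delegated_approvers.add(doctor_pubkey)

    def approve(self, caller_pubkey: str) -> None:
        # Only a delegated public key may approve the medical condition.
        if caller_pubkey not in self.delegated_approvers:
            raise PermissionError("caller is not a delegated approver")
        self.status = "APPROVED"

record = MedicalRecordContract(owner_pubkey="patient_pk", condition="hypertension")
record.delegate_approval("patient_pk", "doctor_pk")
record.approve("doctor_pk")
print(record.status)   # APPROVED; other parties see only public keys and status
```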

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented on any Blockchain platform, including totally uncontrolled public blockchain platforms, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known and each participant trusts the others, but the due diligence that would otherwise accompany the actual submission of proof is avoided.

Organizations that share similar domains and business processes form a Blockchain network to gain business benefits for their own processes. Such a controlled network among known and identified organizations is called a Consortium Blockchain.

Illustrated view of a Consortium Blockchain involving multiple organizations with differing access rights. Each member controls their own access to the Blockchain network with cryptographic keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept in building trust-based networks. While a basic Blockchain platform can help realize the concept in a trust-based manner, a lot of research is being done to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) utilizes a concept known as a “zero-knowledge proof”. Developers have already started integrating zk-SNARKs into Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article about Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple stages, and at the advanced maturity levels Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey

The healthcare industry today is affected by fraud and a lack of trust on one side, and growing patient privacy concerns on the other. In this context, the introduction of Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Healthcare Data Sharing

Srinivasan Sundararajan

Patient Care Redefined

The fight against the novel coronavirus has witnessed transformational changes in the way patient care is defined and managed. Proliferation of telemedicine has enabled consultations across geographies. In the current scenario, access to patients’ medical records has also assumed more importance.

The journey towards a solution also taught us that research on patient data is equally important. The more sample data there is about infected patients, the better the vaccine or remedy. However, the growing concern about the privacy of patient data cannot be ignored. Moreover, patients who provide their data for medical research should also benefit monetarily for their contributions.

The above facts basically point to the need for being able to share vital healthcare data efficiently so that patient care is improved, and more lives are saved.

The healthcare industry needs a data-sharing framework that not only shares patient data but also provides much-needed controls on data ownership for various stakeholders, including the patients.

Types of Healthcare Data

  • PHR (Personal Health Record): An electronic record of health-related information on an individual that conforms to nationally recognized interoperability standards and that can be drawn from multiple sources while being managed, shared, and controlled by the individual.
  • EMR (Electronic Medical Record): Health-related information on an individual that can be created, gathered, managed, and consulted by authorized clinicians and staff within one healthcare organization. 
  • EHR (Electronic Health Record): Health-related information on an individual that conforms to nationally recognized interoperability standards and that can be created, managed and consulted by authorized clinicians and staff across more than one healthcare organization. 

In the context of large multi-specialty hospitals, EMR could also be specific to one specialist department and EHR could be the combination of information from various specialist departments in a single unified record.

Together these 3 forms of healthcare data provide a comprehensive view of a patient (patient 360), thus resulting in quicker diagnoses and personalized quality care.

Current Challenges in Sharing Healthcare Data

  • Lack of unique identity for patients prevents a single version of truth. Though there are government-issued IDs like SSN, their usage is not consistent across systems.
  • High cost and error-prone integration options with provider-controlled EMR/EHR systems. While there is standardization with respect to healthcare interoperability API specifications, the effort needed for integration is high.
  • Conflict of interest in ensuring patient privacy and data integrity, while allowing data sharing. Digital ethics dictate that patient consent management take precedence while sharing their data.
  • Monetary benefits of medical research on patient data are not passed on to patients. As mentioned earlier, in today’s context analyzing existing patient information is critical to finding a cure for diseases, but there are no incentives for these patients.
  • Data stewardship, consent management, and compliance needs like HIPAA and GDPR. Let’s assume a hospital specializing in heart-related issues shares a patient record with a hospital that specializes in eye care. How do we decide which portions of the patient information are owned by which hospital, and how is the governance managed?
  • Lack of real-time information attributing to data quality issues and causing incorrect diagnoses.

The above list is not comprehensive but points to some of the issues that are plaguing the current healthcare data-sharing initiatives.

Blockchain for Healthcare Data Sharing

Some of the basic attributes of blockchain are mentioned below:

  • Blockchain is a distributed database, whereby each node of the database can be owned by a different stakeholder (say hospital departments) and yet all updates to the database eventually converge resulting in a distributed single version of truth.
  • Blockchain databases utilize a cryptography-based transaction processing mechanism, such that each object stored inside the database (say a patient record) can be distinctly owned by a public/private key pair and the ownership rights carry throughout the life cycle of the object (say from patient admission to discharge).
  • Blockchain transactions are carried out using smart contracts which basically attach the business rules to the underlying data, ensuring that the data is always compliant with the underlying business rules, making it even more reliable than the data available in traditional database systems.

These underlying properties of Blockchain make it a viable technology platform for healthcare data sharing, as well as for ensuring data stewardship and patient privacy rights.
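As a rough illustration of how such a chain of updates stays tamper-evident and converges to a single version of truth, here is a minimal sketch in plain Python using only the standard library (the record fields are invented for illustration and are not the Rhodium data model). Every block stores the hash of the previous block, so altering an earlier patient record breaks the chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, {"patient": "P001", "event": "admission"})
append_block(chain, {"patient": "P001", "event": "lab result", "value": 7.2})
append_block(chain, {"patient": "P001", "event": "discharge"})
print(chain_is_valid(chain))                 # True

chain[1]["record"]["value"] = 9.9            # tamper with an earlier record
print(chain_is_valid(chain))                 # False: the chain no longer links up
```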

GAVS Rhodium Framework for Healthcare Data Sharing

GAVS has developed a framework – ‘Rhodium’, for healthcare data sharing.

This framework combines the best features of multi-modal databases (relational, NoSQL, graph) with the data-sharing capabilities facilitated by Blockchain, to come up with a unified framework for healthcare data sharing.

The following are the high-level components (in a healthcare context) of the Rhodium framework. As you can see, each of the individual components of Rhodium plays a role in healthcare information exchange at various levels.

GAVS’ Rhodium Framework for Healthcare

GAVS has also defined a maturity model for healthcare organizations for utilizing the framework towards healthcare data sharing. This model defines 4 stages of healthcare data sharing:

  • Within a Hospital 
  • Across Hospitals
  • Between Hospitals & Patients
  • Between Hospitals, Patients & Other Agencies

The below progression diagram illustrates how the framework can be extended for various stages of the life cycle, and typical use cases that are realized in each phase. Detailed explanations of various components of the Rhodium framework, and how it realizes use cases mentioned in the different stages will be covered in subsequent articles in this space.

Rhodium Patient Data Sharing Journey

Benefits of the GAVS Rhodium Framework for Healthcare Data Sharing

The following are the general foreseeable benefits of using the Rhodium framework for healthcare data sharing.

(Figure: Benefits of the Rhodium framework for healthcare data sharing)

Healthcare Industry Trends with Respect to Data Sharing

The following are some of the trends we are seeing in Healthcare Data Sharing:

  • Interoperability will drive privacy and security improvements
  • New privacy regulations will continue to come up, in addition to HIPAA
  • The ethical and legal use of AI will empower healthcare data security and privacy
  • The rest of 2020 and 2021 will be defined by the duality of data security and data integration, and providers’ ability to execute on these priorities. That, in turn, will, in many ways, determine their effectiveness
  • In addition to industry regulations like HIPAA, national data privacy standards including Europe’s GDPR, California’s Consumer Privacy Act, and New York’s SHIELD Act will further increase the impetus for providers to prioritize privacy as a critical component of quality patient care

The below documentation from the HIMSS site talks about maturity levels with respect to healthcare interoperability, which is addressed by the Rhodium framework.

Source: https://www.himss.org/what-interoperability

This framework is in its early stages of experimentation and is a prototype of how a Blockchain + Multi-Modal Database powered solution could be utilized for sharing healthcare data, which would be hugely beneficial to patients as well as healthcare providers.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi-Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Center of Excellence – Big Data

The Big Data CoE is a team of experts that experiments with and builds cutting-edge solutions by leveraging the latest technologies, like Hadoop, Spark, TensorFlow, and emerging open-source technologies, to deliver robust business results. A CoE is where organizations identify new technologies, learn new skills, and develop appropriate processes that are then deployed into the business to accelerate adoption.

Leveraging data to drive competitive advantage has shifted from being an option to being a requirement in today's hyper-competitive business landscape. One of the main objectives of the CoE is deciding on the right strategy for the organization to become data-driven and benefit from a world of Big Data, Analytics, Machine Learning, and the Internet of Things (IoT).

Triple Constraints of Projects

“According to the Chaos Report, 52% of projects are either delivered late or run over the allocated budget. The average across all companies is 189% of the original cost estimate. The average cost overrun is 178% for large companies, 182% for medium companies, and 214% for small companies. The average overrun is 222% of the original time estimate. For large companies, the average is 230%; for medium companies, the average is 202%; and for small companies, the average is 239%.”

The Big Data CoE plays a vital role in bringing down costs and reducing response times to ensure projects are delivered on time, by helping the organization build skillful resources.

Big Data’s Role

The CoE helps the organization build quality big data applications on their own by maximizing their ability to leverage data. Our data engineers are committed to helping you:

  • define your strategic data assets and data audience
  • gather the required data and put in place new collection methods
  • get the most from predictive analytics and machine learning
  • have the right technology, data infrastructure, and key data competencies
  • ensure you have an effective security and governance system in place to avoid huge financial, legal, and reputational problems.

Data Analytics Stages

The architecture provides optimized building blocks covering all data analytics stages: data acquisition from a data source, preprocessing, transformation, data mining, modeling, validation, and decision making.

Focus areas

Algorithms support the following computation modes:

  • Batch processing
  • Online processing
  • Distributed processing
  • Stream processing

The Big Data analytics lifecycle can be divided into the following nine stages (a small code sketch covering a few of these stages follows the list):

  • Business Case Evaluation
  • Data Identification
  • Data Acquisition & Filtering
  • Data Extraction
  • Data Validation & Cleansing
  • Data Aggregation & Representation
  • Data Analysis
  • Data Visualization
  • Utilization of Analysis Results
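As a rough, minimal sketch of a few of these stages strung together (assuming Python with pandas; the file name and columns are invented for illustration), the snippet below covers acquisition, validation and cleansing, aggregation, and a simple analysis step.

```python
import pandas as pd

# Data Acquisition & Filtering: load the raw data (file and columns are hypothetical).
raw = pd.read_csv("transactions.csv")     # e.g. columns: customer_id, region, amount

# Data Validation & Cleansing: drop incomplete rows and impossible values.
clean = raw.dropna(subset=["customer_id", "amount"])
clean = clean[clean["amount"] > 0]

# Data Aggregation & Representation: summarize spend per region.
summary = clean.groupby("region")["amount"].agg(["count", "sum", "mean"])

# Data Analysis / Utilization of Analysis Results: flag regions worth a closer look.
top_regions = summary.sort_values("sum", ascending=False).head(3)
print(top_regions)
```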

A key focus of the Big Data CoE is to establish a data-driven organization by developing proofs of concept with the latest Big Data and Machine Learning technologies. As part of CoE initiatives, we are involved in developing AI widgets for various marketplaces, such as Azure, AWS, Magento, and others. We are also actively involved in engaging and motivating the team to learn cutting-edge technologies and tools like Apache Spark and Scala. We encourage the team to approach each problem in a pragmatic way by making them understand the latest architectural patterns over the traditional MVC methods.

It has been established that business-critical decisions supported by data-driven insights have been more successful. We aim to take our organization forward by unleashing the true potential of data!

If you have any questions about the CoE, you may reach out to them at SME_BIGDATA@gavstech.com

CoE Team Members

  • Abdul Fayaz
  • Adithyan CR
  • Aditya Narayan Patra
  • Ajay Viswanath V
  • Balakrishnan M
  • Bargunan Somasundaram
  • Bavya V
  • Bipin V
  • Champa N
  • Dharmeswaran P
  • Diamond Das
  • Inthazamuddin K
  • Kadhambari Manoharan
  • Kalpana Ashokan
  • Karthikeyan K
  • Mahaboobhee Mohamedfarook
  • Manju Vellaichamy
  • Manojkumar Rajendran
  • Masthan Rao Yenikapati
  • Nagarajan A
  • Neelagandan K
  • Nithil Raj Tharammal Paramb
  • Radhika M
  • Ramesh Jayachandar
  • Ramesh Natarajan
  • Ruban Salamon
  • Senthil Amarnath
  • T Mohammed Anas Aadil
  • Thulasi Ram G
  • Vijay Anand Shanmughadass
  • Vimalraj Subash

Center of Excellence – Database

During World War II, there was a time when the Germans were winning on every front and the fear of Hitler taking over the world was looming. At that point in time, had the Allies not taken drastic measures and invested in ground-breaking technologies such as radar, aircraft, atomic energy, etc., the world would have been starkly different from what it is today.

Even in today’s world, the pace at which things are changing is incredible. The evolution of technology is unstoppable, and companies must be ready. There is an inherent need for them to differentiate themselves by providing solutions that showcase a deep understanding of domain and technology to address evolving customer expectations. What becomes extremely important for companies is to establish themselves as incubators of innovation and possess the ability to constantly innovate and fail fast. Centers of Excellence can be an effective solution to address these challenges.

“An Organisation’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

– Jack Welch, former Chairman and CEO of General Electric

The Database CoE was formed with a mission to groom, enhance and incubate talents within GAVS to stay abreast of the evolving technology landscape and help our customers with cutting edge technology solutions.

We identify the experts and the requirements across all customer engagements within GAVS. Regular connects and technology sessions ensure everyone in the CoE is learning at least one new topic a week. Below is our charter and roadmap by priority:

(Figure: Database CoE charter and roadmap by priority)

The Database CoE is focused on assisting our customers at every stage of the engagement, right from onboarding and planning to execution, with a consultative approach and a futuristic mindset. With the above primary goals, we are currently working on the below initiatives:

Competency Building

When we help each other and stand together we evolve to be the strongest.

Continuous learning is an imperative in the current times. Our fast-paced training within project teams is an alternative to traditional classroom sessions. We believe true learning happens when you work on something hands-on. With this key aspect in mind, we divide the team into smaller groups and map them to projects so they get larger exposure and gain from experience.

This started off with a pilot with an ISP, where we trained 4 CoE members in Azure and Power BI within a span of 2 months.

Database Maturity Assessment

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly.”

– George Westerman, research scientist at the MIT Center for Digital Business

Why Bother with a Database Assessment?

We often know we have a problem and can visualize the ideal state we want our technology solution to get us to.  However, it is challenging to figure out how to get there because it’s easy to confuse the symptoms with the cause of a problem. Thus, you end up solving the ‘symptom’ with a (potentially expensive) piece of technology that is ill-equipped to address the underlying cause.

We offer a structured process to assess your current database estate and select a technology solution. This helps you get around this problem, reduce risks, and fast-track the path to your true objective with future-proofing, by forcing you to both identify the right problem and solve it the right way.

Assessment Framework

Below are the three key drivers powering the assessment:

  • Accelerated Assessment: automated assessment and benchmarking of existing and new database estates against industry best practices and standards.
  • Analyze & Fine-tune: analyze assessment findings and implement recommendations on performance, consistency, and security aspects.
  • NOC + Zero Touch L2: shift left and automate L1/L2 service requests and incidents with the help of Database CoE automation experts.

As we progress on our journey, we want to establish ourselves as a catalyst that helps our customers future-proof their technology and adopt new solutions early and seamlessly.

If you have any questions about the CoE, you may reach out to them at COE_DATABASE@gavstech.com

CoE Team Members

  • Ashwin Kumar K
  • Ayesha Yasmin
  • Backiyalakshmi M
  • Dharmeswaran P
  • Gopinathan Sivasubramanian
  • Karthikeyan Rajasekaran
  • Lakshmi Kiran  
  • Manju Vellaichamy  
  • Manjunath Kadubayi  
  • Nagarajan A  
  • Nirosha Venkatesalu  
  • Praveen kumar Ralla  
  • Praveena M  
  • Rajesh Kumar Reddy Mannuru  
  • Satheesh Kumar K  
  • Sivagami R  
  • Subramanian Krishnan
  • Venkatesh Raghavendran

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots were not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there that are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source. They cannot be hosted on our own servers or run on-premise, and they are mostly generalized rather than domain-specific, by design.

DialogFlow vs.  RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. ‘Mostly complete’ here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, which is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities, intents, etc. either via the API or through its web-based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires cloud-based communication.
  • It cannot be operated on-premise.

Rasa NLU + Core

  • To compete with the best frameworks like Google DialogFlow and Microsoft LUIS, RASA came up with two built-in components, NLU and CORE.
  • RASA NLU handles intents and entities, whereas RASA CORE takes care of the dialogue flow and guesses the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface; users are free to customize it and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives control to the NLU, through which we can customize accordingly to a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow, Keras.

Also, Rasa Stack is a platform that has seen some fast growth within 2 years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted, like a place or time. From the previous example, the intent tells us the aim is to find the day of the week, but for which date? If we extract “today” as an entity, we can perform the action on today’s date.
  • Actions: As the name suggests, it is an operation that can be performed by the bot. It could be replying with something (text, image, video, suggestion, etc.), querying a database, or any other possibility in code (a sketch of a custom action follows this list).
  • Stories: These are sample interactions between the user and the bot, defined in terms of intents captured and actions performed. The developer can specify what to do on receiving user input of some intent, with or without certain entities. For example: if the user’s intent is to find the day of the week and the entity is today, find the day of the week for today and reply.
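As a sketch of what such a custom action can look like (assuming the Python rasa_sdk package; the action name and reply text are illustrative, not from any real bot), the following action replies with the current day of the week when a story triggers it:

```python
from datetime import date

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionTellDay(Action):
    """Custom action that answers the 'which day is today?' intent."""

    def name(self) -> str:
        # Must match the action name referenced in the bot's domain and stories.
        return "action_tell_day"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain: dict):
        today = date.today().strftime("%A")
        dispatcher.utter_message(text=f"Today is {today}.")
        return []   # no events (e.g. slot changes) to return
```

In the bot’s domain and stories, ‘action_tell_day’ would be listed as an action mapped to that intent, so that RASA CORE can predict it as the next step.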

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides the function of intent classification and entity extraction. This helps the chatbot to understand what the user is saying. Refer to the below diagram of how NLU processes user input.
(Diagram: how RASA NLU processes user input)

  • RASA CORE: it uses machine learning techniques to generalize the dialogue flow of the system. It also predicts next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:

(Diagram: the steps a Rasa assistant goes through to respond to a message)

The steps are as follows:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.

Areas of application

RASA is a one-stop solution used across various industries:

  • Customer Service: broadly used for technical support, accounts and billings, conversational search, travel concierge.
  • Financial Services: used in many banks for account management, bills, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and others.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

Observability versus Monitoring

Sri Chaganty

“Observability” has become a key trend in Service Reliability Engineering practice.  One of the recommendations from Gartner’s latest Market Guide for IT Infrastructure Monitoring Tools released in January 2020 says, “Contextualize data that ITIM tools collect from highly modular IT architectures by using AIOps to manage other sources, such as observability metrics from cloud-native monitoring tools.”

Like so many other terms in software engineering, ‘observability’ is a term borrowed from an older physical discipline: in this case, control systems engineering. Let me use the definition of observability from control theory in Wikipedia: “observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.”

Observability is gaining attention in the software world because of its effectiveness at enabling engineers to deliver excellent customer experiences with software despite the complexity of the modern digital enterprise.

When we blew up the monolith into many services, we lost the ability to step through our code with a debugger: it now hops the network.  Monitoring tools are still coming to grips with this seismic shift.

How is observability different from monitoring?

Monitoring requires you to know what you care about before you know you care about it. Observability allows you to understand your entire system and how it fits together, and then use that information to discover what specifically you should care about when it’s most important.

Monitoring requires you to already know what normal is. Observability allows discovery of different types of ‘normal’ by looking at how the system behaves, over time, in different circumstances.

Monitoring asks the same questions over and over again. Is the CPU usage under 80%? Is memory usage under 75%? Is the latency under 500ms? This is valuable information, but monitoring is only useful for known problems.
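As a concrete example of those repetitive questions, here is a minimal monitoring check (a sketch assuming Python with the psutil package; the thresholds simply mirror the numbers above). It can only ever tell you whether the metrics you decided to watch in advance have crossed their limits.

```python
import psutil

# Thresholds decided up front -- monitoring can only answer these fixed questions.
CPU_LIMIT = 80.0      # percent
MEM_LIMIT = 75.0      # percent

def run_checks() -> None:
    cpu = psutil.cpu_percent(interval=1)      # sample CPU usage over one second
    mem = psutil.virtual_memory().percent     # current memory usage

    print(f"CPU {cpu:.1f}% -> {'OK' if cpu < CPU_LIMIT else 'ALERT'}")
    print(f"MEM {mem:.1f}% -> {'OK' if mem < MEM_LIMIT else 'ALERT'}")

if __name__ == "__main__":
    run_checks()
```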

Observability, on the other hand, is about asking different questions almost all the time. You discover new things.

Metrics do not equal observability.

What Questions Can Observability Answer?

Below are sample questions that can be addressed by an effective observability solution:

  • Why is x broken?
  • What services does my service depend on — and what services are dependent on my service?
  • Why has performance degraded over the past quarter?
  • What changed? Why?
  • What logs should we look at right now?
  • What is system performance like for our most important customers?
  • What SLO should we set?
  • Are we out of SLO?
  • What did my service look like at time point x?
  • What was the relationship between my service and x at time point y?
  • What was the relationship between attributes across the system before we deployed? What’s it like now?
  • What is most likely contributing to latency right now? What is most likely not?
  • Are these performance optimizations on the critical path?

About the Author –

Sri is a Serial Entrepreneur with over 30 years’ experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.

Business with a Heart

Balaji Uppili

People and technology are converging like never before, as the world is gripped by COVID-19. Just a few months ago, nobody could have predicted or foreseen the way businesses are having to work today. As we were strategizing on corporate governance, digital transformation, and the best of resiliency plans to ensure business continuity, no one ever anticipated the scale and enormity of COVID-19.

Today, it has become obvious that COVID-19 has brought about the convergence of technology and humanity, and shown how it can change the way businesses work and function. While we as leaders have been thinking largely about business outcomes, this pandemic has triggered a more humane approach, and that approach is here to stay. The humane approach will be the differentiator and will prove to be the winner.

There is no doubt that this pandemic has brought an urgent need to accelerate our digital capabilities. With the focus on strong IT infrastructure and remote working, workforces were able to transition to working from home and meeting through video conferencing. Surprisingly, this has turned out to increase the humane aspect of business relations: it has become perfectly alright for both parties to see children, spouses, or pets in meeting backgrounds, and that in itself has broken down huge barriers and formalities. It is refreshing to see the emerging empathy that is getting stronger with every meeting, increasing collaboration and communication. It is becoming increasingly clear that we have long overlooked the circumstances in which people show up to work. Suddenly it is more visible that people have equally strong roles within the family; when we see parents having to home-school their children, or having other care obligations, we are viewing their personal lives and are able to empathize with them more. We are seeing the impact that business can have on people and their personal lives, and this is a never-before opportunity for leaders to put our people first.

And with customers being the center of every business, not being able to hold in-person meetings has warranted newer ways to collaborate and has further strengthened customer-centricity initiatives. It has become evident that no matter how much we as leaders think about automating operations, it is human connections that run businesses successfully. A lot of things have been unraveled: important business imperatives like the criticality of clean workspace compliance, and the fact that offshoring thousands of miles away is not a compromise but a very cost-effective and efficient way of getting things done. Productivity has also increased; the work done thus far has had a positive impact of at least 20%, or even more in certain situations. As boundaries and barriers are broken, the rules about who should work on something and when they should work on it have become less rigid, and employees are less regimental about time. Virtual crowdsourcing has become the norm: you throw an idea at a group of people and whoever has the ability and the bandwidth to handle the task takes care of it, instead of a formal task assignment, and this highlights the fungibility of people.

All in all, the reset in execution processes and the introduction of a much more humane approach are here to stay, and they make the new norm even more exciting.

About the Author –

Balaji has over 25 years of experience in the IT industry, across multiple verticals. His enthusiasm, energy, and client focus is a rare gift, and he plays a key role in bringing new clients into GAVS. Balaji heads the Delivery department and passionately works on Customer delight. He says work is worship for him and enjoys watching cricket, listening to classical music, and visiting temples.

Hyperautomation

Bindu Vijayan

According to Gartner, “Hyper-automation refers to an approach in which organizations rapidly identify and automate as many business processes as possible. It involves the use of a combination of technology tools, including but not limited to machine learning, packaged software and automation tools to deliver work”. Hyperautomation is among the year’s top 10 technology trends, according to them.

It is expected that by 2024, organizations will be able to lower their operational costs by 30% by combining hyper-automation technologies with redesigned operational processes. According to Coherent Market Insights, “Hyper Automation Market will Surpass US$ 23.7 Billion by the end of 2027.  The global hyper automation market was valued at US$ 4.2 Billion in 2017 and is expected to exhibit a CAGR of 18.9% over the forecast period (2019-2027).”

How it works

To put it simply, hyperautomation uses AI to dramatically enhance automation technologies and augment human capabilities. Given the spectrum of tools it uses, like Robotic Process Automation (RPA), Machine Learning (ML), and Artificial Intelligence (AI), all functioning in sync to automate complex business processes, even those that once called for inputs from SMEs, it is a powerful aid for organizations in their digital transformation journey.

Hyperautomation brings robotic intelligence into the traditional automation process and enhances the completion of processes to make them more efficient, faster, and error-free. Combining AI tools with RPA, the technology can automate almost any repetitive task; it automates the automation by identifying business processes and creating bots to automate them. It calls for different technologies to be leveraged, which means the businesses investing in it should have the right tools, and the tools should be interoperable. The main feature of hyperautomation is that it merges several forms of automation that work seamlessly together, so a hyperautomation strategy can consist of RPA, AI, Advanced Analytics, Intelligent Business Management, and so on. With RPA, bots are programmed to log into applications, manipulate data, and respond to prompts. RPA can be as complex as handling multiple systems through several transactions, or as simple as copying information between applications. Combine that with the concept of Process Automation or Business Process Automation, which enables the management of processes across systems, and it can help streamline processes to increase business performance. The tool or the platform should be easy to use and, importantly, scalable; investing in a platform that can integrate with existing systems is crucial. The selection of the right tools is what Gartner calls “architecting for hyperautomation”.

Impact of hyperautomation

Hyperautomation has a huge potential to impact the speed of digital transformation for businesses, given that it automates complex work which usually depends on input from humans. With the work moved to intelligent digital workers (RPA with AI) that can perform repetitive tasks endlessly, human performance is augmented. These digital workers can then become real game-changers with their efficiency and their capability to connect to multiple business applications, discover processes, work with voluminous data, and analyse it in order to arrive at decisions for further or new automation.

Leveraging previously inaccessible data and processes and automating them often results in the creation of a digital twin of the organization (DTO): virtual models of every physical asset and process in an organization. Sensors and other devices monitor these digital twins to gather vital information on their condition, and insights are derived regarding their health and performance. The more data there is, the smarter the systems get, providing sharp insights that can thwart problems, help businesses make informed decisions on new services and products, and in general enable informed assessments. Having a DTO throws light on the hitherto unknown interactions between functions and processes, and on how they can drive value and business opportunities. That is powerful: you get to see the business outcome as it happens, or the negative effect something causes, and that sort of intelligence within the organization is a powerful tool for making very informed decisions.

Hyperautomation is the future, an unavoidable market state

“Hyperautomation is an unavoidable market state in which organizations must rapidly identify and automate all possible business processes.” – Gartner

It is interesting to note that some companies are coming up with no-code automation. Creating tools that can be easily used even by those who cannot read or write code can be a major advantage. For example, if employees are able to automate the multiple processes they are responsible for, hyperautomation can help get more done at a much faster pace, sparing time for them to get involved in planning and strategy. This brings more flexibility and agility within teams, as automation can be managed by the teams for the processes they are involved in.

Conclusion

With hyperautomation, it would be easy for companies to actually see the ROI they are realizing from the processes that have been automated, with clear visibility of the time and money saved. Hyperautomation enables seamless communication between different data systems, giving organizations flexibility and digital agility. Businesses enjoy the advantages of increased productivity, quality output, greater compliance, better insights, advanced analytics, and, of course, automated processes. It allows machines to gain real insights into business processes and understand them well enough to make significant improvements.

“Organizations need the ability to reconfigure operations and supporting processes in response to evolving needs and competitive threats in the market. A hyperautomated future state can only be achieved through hyper agile working practices and tools.”  – Gartner

Customer Centricity during Unprecedented Times

Balaji Uppili

“Revolve your world around the customer and more customers will revolve around you.”

– Heather Williams

Customer centricity lies at the heart of GAVS. An organization’s image is largely the reflection of how well its customers are treated. And unprecedented times demand unprecedented measures to ensure that our customers are well-supported. We conversed with our Chief Customer Success Officer, Balaji Uppili, to understand the pillars/principles of maintaining and improving an organization’s customer-centricity amidst a global emergency.

Helping keep the lights on

Keeping the lights on – this forms the foundation of all organizations. It is of utmost importance to extend as much support as required by the customers to ensure their business as usual remains unaffected. Keeping a real-time pulse on the evolving requirements and expectations of our customers will go a long way. It is impossible to overstate the significance of continuous communication and collaboration here. Our job doesn’t end at deploying collaboration tools; we must also measure their effectiveness and take the necessary corrective actions.

The lack of a clear vision into the future may lead business leaders into making not-so-sound decisions. Hence, bringing an element of ‘proactiveness’ into the equation will go a long way in assuring the customers of having invested in the right partner.

Being Empathy-driven

While empathy has always been a major tenet of customer-centricity, it is even more important in these times. The crisis has affected everyone, some more than others, and in ways, we couldn’t have imagined. Thus, we must drive all our conversations with empathy. The way we deal with our customers in a crisis is likely to leave lasting impressions in their minds.

Like in any relationship, we shouldn’t shy away from open and honest communication. It is also important to note that all rumours should be quelled by pushing legitimate information to our customers regularly. Transparency in operations and compassion in engagements will pave the path for more profound and trusted relationships.

Innovating for necessity and beyond

It is said that “Necessity is the mother of invention”. We probably haven’t faced a situation in the recent past that necessitated invention as much as it does now!

As we strive to achieve normalcy, we should take up this opportunity to innovate. We need solutions that will not just help our customers adjust to the new reality, but arm them with more efficient ways of achieving their desired outcomes. Could the new way of working be the future standard? Is the old way worth going back to? This is the apt time to answer these questions and reimagine our strategies.

Our deep understanding of our customers holds the key to helping them in meaningful ways. This should be an impetus for us to devise ways of delivering more value to our customers.

General Principles

With rapidly evolving situations and uncertainty, it is easy to fall prey to misinformation and rumours. Hence, it is crucial to keep a channel of communication open between you and your customers and share accurate information. We should be listening to our customers and be extra perceptive to their needs, whether they are articulated or not. Staying ahead and staying positive should be our mantras to swear by. The new barometer of customer experience will be how their partners/vendors meet their new needs with care and concern.

Over-communicating is not something we should shy away from. We should be constantly communicating with our customers to reassure them of our resolve to stand by them. Again, it is an absolute must to adjust our tone and not plug in any sales-y messages.

It is easy to lose focus on long-term goals and just concentrate on near-term survival. This may not be the best strategy if we’re looking to stay afloat after all this is over. All decisions must be data-driven or outcome-driven. Reimagining and designing newer ways of delivering value and ensuring customer success will be the true test of enterprises in the near future.

We’re looking at uncertain times ahead. It is imperative to build resilience to such disruptions. One way would be customer-centricity – we should be relentless in our pursuit of understanding, connecting with, and delighting our customers. Resilience is going to be as important as cost and efficiency in a business.

About the Author:
Balaji has over 25 years of experience in the IT industry, across multiple verticals. His enthusiasm, energy and client focus is a rare gift, and he plays a key role in bringing new clients into GAVS. Balaji heads the Delivery department and passionately works on Customer delight. He says work is worship for him, and enjoys watching cricket, listening to classical music and visiting temples.