Quantum Computing

Vignesh Ramamurthy

In the MARVEL multiverse, Ant-Man has one of the coolest superpowers out there. He can shrink himself down as well as blow himself up to any size he desires! He was able to reduce to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, in the real world, we have quantum computing. Quantum computers can outperform supercomputers on certain problems, and tech companies like Google, IBM, and Rigetti have built them.

Google achieved Quantum Supremacy with its quantum computer ‘Sycamore’ in 2019, claiming it performed in 200 seconds a calculation that would take the world’s most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, with temperatures close to absolute zero.

Quantum Physics

Quantum computing is rooted in the discipline of quantum physics. Its heart and soul reside in what we call qubits (quantum bits) and superposition. So, what are they?

Let’s take a simple example: imagine you spin a coin. One cannot know the outcome until it falls flat on a surface; it can be either heads or tails. However, while the coin is spinning, you can say its state is both heads and tails at the same time. A qubit behaves like the spinning coin, and this state is called superposition.

So, how do they work and what does it mean?

We know a classical bit is either a 0 or a 1 (two distinct states). A qubit can hold both at the same time. In the end, these qubits pass through something called the “Grover operator”, which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, just like how Doctor Strange found the one winning outcome in the movie Infinity War. However, it is important to understand how this works technically.

We shall look at two explanations which, I feel, give an accurate picture of the technical side of it.

The first explanation of the quantum mechanics involved is from Scott Aaronson, a quantum computing researcher at the University of Texas at Austin.

Amplitude: each outcome has an amplitude, which can be positive or negative; there is an amplitude for being 0 and an amplitude for being 1. The goal is to choreograph the computation so that amplitudes leading to wrong answers cancel each other out. That way, the amplitude of the right answer remains the only likely outcome.
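The cancellation Aaronson describes can be sketched numerically. The toy numpy example below (a simulation, not tied to any real hardware) applies the Hadamard gate twice to a qubit that starts as 0: the two computational paths leading to 1 carry opposite amplitudes and cancel, leaving only the “right” answer.

```python
import numpy as np

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

state = np.array([1.0, 0.0])   # start in |0>: amplitude 1 for outcome 0

state = H @ state              # amplitudes (1/sqrt2, 1/sqrt2): both outcomes live
state = H @ state              # second H: the two paths to |1> have opposite
                               # signs and interfere destructively

print(np.round(state, 10))     # back to [1. 0.]: outcome 0 with certainty
```

The second gate does not "undo" the first in a classical sense; it is the signs of the amplitudes that make the wrong outcome vanish.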

Quantum computers function using superconductivity. A chip the size of an ordinary computer chip holds little coils of wire, nearly big enough to see with the naked eye. Two different quantum states of the current flowing through these coils correspond to 0 and 1, or to superpositions of them.

These coils interact with each other; nearby ones talk to each other and generate what is called an entangled state, an essential resource in quantum computing. The way the qubits interact is completely programmable, so we can send electrical signals to tweak them according to our requirements. The whole chip is placed in a refrigerator at a temperature close to absolute zero. This induces superconductivity, which makes the coils briefly behave as qubits.

The second explanation is adapted from ‘Kurzgesagt — In a Nutshell’, a YouTube channel.

We know a bit is either a 0 or a 1, so 4 bits can form 2^4 = 16 different configurations (0000, 0001, and so on). But 4 classical bits can be in only one of those 16 configurations at a time. 4 qubits in superposition, on the other hand, can be in all 16 combinations at once.

This grows exponentially with each extra qubit: 20 qubits can hence hold about a million values in parallel. Entangled qubits are correlated, so by measuring one entangled qubit we can directly deduce the corresponding property of its partners.
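The counting argument above is just powers of two; a two-line check confirms the figures quoted (16 configurations for 4 bits, about a million for 20 qubits):

```python
# The number of distinct configurations doubles with every qubit added.
for n in (1, 4, 10, 20):
    print(n, "qubits ->", 2 ** n, "basis configurations")

print(2 ** 20)   # 1048576: the "million values" held in parallel by 20 qubits
```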

A normal logic gate takes a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them and manipulate probabilities, and finally measures the outcome, collapsing the superpositions to an actual sequence of 0s and 1s. This is how an entire set of calculations is, in effect, performed at the same time.
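That pipeline (set up qubits, apply gates to entangle them, then measure) can be imitated with plain state-vector arithmetic. The sketch below, again a simulation rather than real hardware, builds the classic two-qubit entangled Bell state using a Hadamard and a CNOT gate; every measurement collapses to a definite bit string, and the two qubits always agree.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit Hadamard, identity, and the two-qubit CNOT gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0   # two qubits in |00>
state = np.kron(H, I) @ state         # put the first qubit into superposition
state = CNOT @ state                  # entangle: Bell state (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2            # Born rule: amplitude -> probability
outcomes = rng.choice(["00", "01", "10", "11"], size=20, p=probs)
print(set(outcomes))                  # only '00' and '11' ever occur
```

Measuring the first qubit here tells you the second one for free, which is exactly the "deduce the property of its partners" behaviour described above.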

What is a Grover Operator?

We now know that, given one entangled qubit, it is possible to deduce properties of all its partners. Grover’s algorithm works because the quantum particles are entangled: since one entangled qubit can vouch for its partners, the algorithm iterates, boosting the amplitude of the correct answer each time, until it finds the solution with a high degree of confidence.

What can they do?

As of now, quantum computing hasn’t made it into everyday real-life applications, simply because the world doesn’t yet have the infrastructure for it.

Assuming they become efficient and ready to use, we could make use of them in the following ways:

1) Self-driving cars are picking up pace. Quantum computers could help these cars by calculating all possible outcomes on the road. Apart from the sensors that reduce accidents, roads have traffic signals. A quantum computer would be able to go through every possibility of how the traffic signals function, including time intervals and traffic, and feed the self-driving cars the single best outcome accordingly. The result would be a seamless commute with no hassles whatsoever, the future as we see it in movies.

2) If AI could use a quantum computer to try every possibility in a design architecture, for instance to construct a circuit board, it could open the door to promising AI-related applications.

Disadvantages

Quantum computers could break RSA encryption, which underpins security across the entire internet; hackers might then steal confidential information related to health, defence, personal records, and other sensitive data. At the same time, quantum computing could help achieve the most secure encryption, by identifying the best scheme among every possible one and building the most secure wall against the attacks that threaten the internet. If such security were built, it would take a completely new kind of attack to break it, and the chances of that are minuscule.

Quantum computing has its share of benefits, but it will take years to be put to use. The infrastructure and investment required are humongous, and it can only be adopted once there are very reliable real-time use cases; much testing remains. There is no doubt that quantum computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be ready for it.

About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I will elaborate on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How does the Blockchain-powered GAVS Rhodium Platform help address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
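To make the prover and verifier roles concrete, here is a toy run of the Schnorr identification protocol, one of the classic zero-knowledge proofs of knowledge. The group parameters are deliberately tiny for readability; real deployments use cryptographically large groups or elliptic curves.

```python
import secrets

# The prover shows she knows a secret x with y = g^x mod p,
# without ever sending x itself.
p, q, g = 23, 11, 2          # g = 2 generates a subgroup of prime order 11 mod 23

x = secrets.randbelow(q)     # prover's secret
y = pow(g, x, p)             # prover's public key, known to the verifier

r = secrets.randbelow(q)     # 1. prover picks random r and commits
t = pow(g, r, p)             #    by sending t = g^r

c = secrets.randbelow(q)     # 2. verifier issues a random challenge

s = (r + c * x) % q          # 3. prover responds; s alone reveals nothing about x

# Verifier's check: g^s must equal t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted without revealing x")
```

The check works because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c; the verifier learns that the prover knows x, and nothing else.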

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is performed because of a lack of trust.

  • Insurance companies are always wary of fraudulent claims (fraud is indeed a major issue), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the patient has indeed been prescribed the medicines before dispensing them.
  • Patients, too, often want to make sure that the diagnosis and treatment given to them are proper and that no misdiagnosis has occurred.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, any of the parties, i.e. patient, hospital, pharmacy, or insurance company, can take on the role of verifier; typically patients, and sometimes hospitals, are the provers.

While ZKP can be applied to any transaction involving the above parties, industry research currently focuses mostly on patient privacy rights. ZKP initiatives target how much, or how little, information a patient (the prover) must share with a verifier to obtain a required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I will not get into the fundamentals of Blockchain here, readers should understand that one of its fundamental backbones is trust within a context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing their personal identities, yet allowing them to participate in a transaction.

Some of the characteristics of Blockchain transactions that make them conducive to Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • A smart contract instance (i.e. a particular invocation of that smart contract) has an owner, i.e. the public key of the account holder who creates it. For example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as it knows the public key of the initiator.
  • Important aspects of an approval life cycle, such as validation, approval, and rejection, can be delegated to other stakeholders by assigning the task to the respective stakeholder’s public key.
  • For example, if a doctor needs to approve a patient’s medical condition, the task can be delegated so that only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.
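As a sketch of the off-chain idea in the points above: only a hash of the approval document needs to live on-chain, and anyone who later receives the document off-chain can check it against that commitment. The document text below is made up purely for illustration.

```python
import hashlib

# Hypothetical approval document, shared off-chain (e.g. direct transfer);
# only its SHA-256 digest is recorded on the blockchain.
document = b"Dr. A approves procedure X for patient record 1234"

onchain_commitment = hashlib.sha256(document).hexdigest()  # stored on the ledger

# A verifier who later receives the document off-chain re-hashes it and
# compares against the on-chain commitment; the chain never sees the content.
received = b"Dr. A approves procedure X for patient record 1234"
assert hashlib.sha256(received).hexdigest() == onchain_commitment
print("document matches the on-chain proof")
```

Participants of the blockchain thus see only the proof of the claim, never the details behind it, which is exactly the property the bullet list describes.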

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented on any Blockchain platform, including totally uncontrolled public blockchains, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known and each participant trusts the others, but the due diligence that would otherwise accompany the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain involving multiple organizations whose access rights differ. Each member controls their own access to the Blockchain Network with cryptographic keys.

Members typically interact with the Blockchain Network by deploying new Smart Contracts (i.e. creating them) as well as accessing existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept for building trust-based networks. While a basic Blockchain platform can support the concept at the trust level, a lot of research is under way to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) is a cryptographic construction of a zero-knowledge proof. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, built by a group of academics and financial technology researchers including Dan Boneh of Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article on Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple stages; at the advanced maturity levels, Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey

The healthcare industry today is affected by fraud and a lack of trust on one side, and growing patient privacy concerns on the other. In this context, the introduction of Zero Knowledge Proofs into healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Design-led Organization: Creative Thinking as a Practice!

Gogul R G

This is the first article in the ‘Design-led Organization’ series, covering creative thinking as a practice in GAVS. It is a first step for readers to explore the world of design and creativity. So, let’s get started!

First, let’s see what design thinking is all about.

There is a common misconception that design thinking is new. But looking back, people have always applied a human-centric creative process to build meaningful and effective solutions. Design has been practiced for ages, to build monuments, bridges, automobiles, subway systems, and more. Design is not limited to aesthetics; it is more of a mindset for finding a solution. Design thinking is a mindset for iteratively thinking through a complex problem and arriving at a viable solution.

Thinking outside of the box can provide an innovative solution to a sticky problem. However, it can be a real challenge, as we naturally develop patterns of thinking based on repetitive activities and the commonly accessed knowledge that surrounds us. It takes effort to detach from a situation we are too closely involved in to be able to see better possibilities.

To illustrate how a fresh way of thinking can create unexpectedly good solutions, consider a famous incident. Some years ago, a truck driver tried to pass under a low bridge. He failed, and the truck became firmly lodged under it.

The driver could neither drive through nor reverse out. The stuck truck caused massive traffic problems, and emergency personnel, engineers, firefighters, and truck drivers gathered to negotiate various solutions to dislodge it.

Emergency workers debated whether to dismantle parts of the truck or chip away at parts of the bridge. Each was looking for a solution within their respective field of expertise. A boy walking by and witnessing the intense debate looked at the truck, at the bridge, then at the road, and said, “Why not just let the air out of the tires?”, to the absolute amazement of all the specialists and experts trying to resolve the issue.

When the solution was tested, the truck drove out with ease, having suffered only the damage caused by its initial attempt to pass under the bridge. The story symbolizes the struggles we face: often, the most obvious solutions are the hardest to come by because of the self-imposed constraints we work within.

Challenging our assumptions and everyday knowledge is often difficult for us humans, as we rely on patterns of thinking in order not to have to learn everything from scratch every time.

Let’s come back to our topic: what is design thinking? Tim Brown, Executive Chairman of IDEO, an international design and consulting firm, defined design thinking as follows:

“Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.”

Now let’s think back to our truck example. A boy with a fresh mindset provided a simple solution to a complex problem. Yes, this is the sweet spot! Everyone is creative and capable of thinking like a designer, outside the box, to come up with a solution. Inculcating design as a mindset for solutions is what we call design thinking.

Yes, you read it right, everyone is creative…

We forget that back in kindergarten, we were all creative. We all played and experimented with weird things without fear or shame; we didn’t know enough not to. The fear of social rejection is something we learn as we grow older, and that is why it is possible to regain our creative abilities even decades later. In the field of design and user experience, individuals who stick with a methodology for a while end up doing amazing things. They come up with breakthrough ideas or suggestions and work creatively with a team to develop something truly innovative. They surprise themselves with the realization that they are a lot more creative than they had thought, and that early success shakes up how they see themselves and makes them eager to do more.

We just need to rediscover what we already have: the capacity to imagine, or build upon, new-to-the-world ideas. But the real value of creativity doesn’t emerge until you are brave enough to act on those ideas.

Geshe Thupten Jinpa, who has been the Dalai Lama’s chief English translator for more than twenty years, shared an insight about the nature of creativity. Jinpa pointed out that there’s no word in the Tibetan language for ‘creativity’ or ‘being creative’. The closest translation is ‘natural’. In other words, if you want to be more creative, you should be more natural! So…be natural!

At your workplace, complex problems can be sorted out more easily when you look for a solution creatively, with a design thinking mindset. Creativity can be improved with the following practices:

  1. Go for a walk.
  2. Play your favorite games.
  3. Move your eyes.
  4. Take a break and enjoy yourself.
  5. Congratulate yourself each time you do something well.
  6. Estimate time, distance, and money.
  7. Take a route you never have taken before.
  8. Look for images in mosaics, patterns, textures, clouds, stars…
  9. Try something you have never done before.
  10. Do a creative exercise.
  11. Start a collection (stamps, coins, art, stationery, anything you wish to collect)
  12. Watch Sci-Fi or fantasy films.
  13. Change the way you do things – there are no routine tasks, only routine way of doing things.
  14. Wear a color you do not like.
  15. Think about how they invented equipment or objects you use daily.
  16. Make a list of 10 things you think are impossible to do and then imagine how you could make each one possible.
  17. For every bad thing that happens to you, remember at least 3 good things that happened.
  18. Read something you have not read yet.
  19. Make friends with people on the other side of the world.
  20. When you have an idea, make a note of it, and later check to see if it happened.
  21. Connect a sport with your work.
  22. Try food you never tried before.
  23. Talk to grandparents and relatives and listen to their stories.
  24. Give an incorrect answer to a question.
  25. Find links between people, things, ideas, or facts.
  26. Ask children how to do something and observe their creativity.

Start practicing the steps above to inculcate a creative mindset, and apply it in your day-to-day work. Companies like GE Healthcare, Procter & Gamble, and Uber have practiced design thinking and applied it in new product launches and in solving complex problems in their organizations. Be natural to be more creative! When you are more creative, you can apply design thinking to seek a solution to any complex problem in your work.

This is the first article in the Design-led Organization series at GAVS. Keep watching this space for more articles on design, and keep exploring the world of design thinking!

About the Author –

Gogul is a passionate UX designer with 8+ years of experience in designing experiences for digital channels like enterprise apps, B2C and B2B apps, mobile apps, kiosks, point of sale, endless aisle, and telecom products. He is passionate about transforming complex problems into actionable solutions using design.

Center of Excellence – Java

The Java CoE was established to partner with our customers and aid them in realizing business benefits through effective adoption of cutting-edge technologies; thus, enabling customer success.

Objectives

  • Be the go-to team for anything related to Java across the organization and customer engagements.
  • Build competency by conducting training and mentoring sessions, publishing blogs, whitepapers and participating in Hackathons.
  • Support presales team in creating proposals by providing industry best solutions using the latest technologies, standards & principles.
  • Contribute a certain percent of revenue growth along with the CSMs.
  • Create reusable artifacts, frameworks, solutions, and best practices which can be used across the organization to improve delivery quality.

Focus Areas

  1. Design Thinking: Setting up a strong foundation of “Design Thinking and Engineering Mindset” is paramount for any business, and that is what we aim to instill.
2. Solution and Technology: Through our practice, we aim to equip GAVS with solution-oriented technology leaders who can lead us through disruptive times.

3. Customer success

  • Identify opportunities in accounts based on the collaboration with CSMs, understand customer needs, get details about the engagement, understand the focus areas and challenges.
  • Understand the immediate needs of the project and provide solutions to address them.
  • Java council to help developers arrive at solutions.
  • Understand the architecture in detail and provide recommendations / create awareness of new technologies
  • Enforce a comprehensive review process to enable quality delivery.

Accomplishments

  • Formed the CoE team
  • Identified the focus Areas
  • Identified leads for every stream
  • Socialized the CoE within GAVS
  • Delivered effective solutions across projects to improve delivery quality
  • Conducted trainings on standards and design-oriented coding practices across GAVS
  • Published blogs to bring in design-oriented development practices
  • Identified the areas for creating re-usable artefacts (Libraries / Frameworks)
  • Brainstormed and finalized the design for creating Frameworks (For the identified areas)
  • Streamlined the DevOps process which can be applied in any engagement
  • Built reusable libraries, components and frameworks which can be used across GAVS
  • Automated the Code Review process
  • Organized and conducted hackathons and tech meetups
  • Discovered potential technical problems/challenges across teams and offered effective solutions, thereby enabling customer success
  • Supported the presales team in creating customized solutions for prospects

Upcoming Activities

  • Establishing tech governance and align managers / tech leads to the process
  • Setting up security standards and principles across domain
  • Building more reusable libraries, components and frameworks which can be used across GAVS
  • Adopting Design Patterns / Anti-patterns
  • Enforcing a strong review process to bring in quality delivery
  • Enabling discussions with the customers
  • Setting up a customer advisory team

Contribution to Organizational Growth

As we continue our journey, we aim to support the revenue growth of our organization. Customer Success being a key goal of GAVS, we will continue to enable it by improving the quality of service delivery and building a solid foundation across all technology and process streams. We also want to contribute to the organization by developing a core competency around a strategic capability and reduce knowledge management risks.

If you have any questions about the CoE, you may reach out to them at COE_JAVA@gavstech.com

CoE Team Members

  • Lakshminarasimhan J
  • Muraleedharan Vijayakumar
  • Bipin V
  • Meenakshi Sundaram
  • Mahesh Rajakumar M
  • Ranjith Joseph Selvaraj
  • Jagathesewaren K
  • Sivakumar Krishnasamy
  • Vijay Anand Shanmughadass
  • Sathya Selvam
  • Arun Kumar Ananthanarayanan
  • John Kalvin Jesudhason

Center of Excellence – Database

During World War II, there was a time when the Germans were winning on every front and the fear of Hitler taking over the world was looming. Had the Allies not taken drastic measures at that point and invested in ground-breaking technologies such as radar, aircraft, and atomic energy, the world would have been starkly different from what it is today.

Even in today’s world, the pace at which things change is incredible. The evolution of technology is unstoppable, and companies must be ready for it. There is an inherent need for them to differentiate themselves by providing solutions that showcase a deep understanding of domain and technology to address evolving customer expectations. It becomes extremely important for companies to establish themselves as incubators of innovation, able to constantly innovate and fail fast. Centers of Excellence can be an effective answer to these challenges.

“An Organisation’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage”

– Jack Welch, former Chairman and CEO of General Electric

The Database CoE was formed with a mission to groom, enhance and incubate talents within GAVS to stay abreast of the evolving technology landscape and help our customers with cutting edge technology solutions.

We identify experts and requirements across all customer engagements within GAVS. Regular connects and technology sessions ensure everyone in the CoE learns at least one new topic each week. Below is our charter and roadmap, by priority:


The Database CoE is focused on assisting our customers at every stage of an engagement, right from onboarding through planning and execution, with a consultative approach and a futuristic mindset. With the above primary goals, we are currently working on the below initiatives:

Competency Building

When we help each other and stand together, we evolve to be the strongest.

Continuous learning is imperative in the current times. Our fast-paced trainings within project teams are an alternative to primitive classroom sessions. We believe true learning happens when you work hands-on. With this key aspect in mind, we divide the team into smaller groups and map them to projects for larger exposure and experience.

This started off with a pilot for an ISP customer, where we trained 4 CoE members in Azure and Power BI within a span of 2 months.

Database Maturity Assessment

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly.”

– George Westerman, research scientist at the MIT Center for Digital Business

Why Bother with a Database Assessment?

We often know we have a problem and can visualize the ideal state we want our technology solution to get us to. However, figuring out how to get there is challenging, because it is easy to confuse the symptoms of a problem with its cause. You thus end up solving the ‘symptom’ with a (potentially expensive) piece of technology that is ill-equipped to address the underlying cause.

We offer a structured process to assess your current database estate and select a technology solution. It helps you get around this problem, reduce risks, and fast-track the path to your true objective, with future-proofing, by forcing you to both identify the right problem and solve it the right way.

Assessment Framework

Below are the three key drivers powering the assessment.

  • Accelerated Assessment
    • Automated assessment and benchmarking of existing and new database estates against industry best practices and standards.
  • Analyze & Finetune
    • Analyze assessment findings and implement recommendations on performance, consistency, and security aspects.
  • NOC + Zero-Touch L2
    • Shift left and automate L1/L2 service requests and incidents with the help of Database CoE automation experts.

As we progress on our journey, we want to establish ourselves as a catalyst that helps our customers future-proof their technology and adopt new solutions early and seamlessly.

If you have any questions about the CoE, you may reach out to them at COE_DATABASE@gavstech.com

CoE Team Members

  • Ashwin Kumar K
  • Ayesha Yasmin
  • Backiyalakshmi M
  • Dharmeswaran P
  • Gopinathan Sivasubramanian
  • Karthikeyan Rajasekaran
  • Lakshmi Kiran  
  • Manju Vellaichamy  
  • Manjunath Kadubayi  
  • Nagarajan A  
  • Nirosha Venkatesalu  
  • Praveen kumar Ralla  
  • Praveena M  
  • Rajesh Kumar Reddy Mannuru  
  • Satheesh Kumar K  
  • Sivagami R  
  • Subramanian Krishnan
  • Venkatesh Raghavendran

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots was not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there that are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source: they cannot be hosted on our own servers and are mostly cloud-hosted. They are also kept general-purpose rather than domain-specific, for a reason.

DialogFlow vs. RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. Mostly complete here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities/intents/etc. either via the API or with their web based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires cloud-based communication.
  • It cannot be operated on premise.

Rasa NLU + Core

  • To compete with frameworks like Google DialogFlow and Microsoft LUIS, RASA offers two core components: NLU and Core.
  • RASA NLU handles intent classification and entity extraction, while RASA Core takes care of the dialogue flow and predicts the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface; users are free to customize it and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open-source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives full control over the NLU pipeline, which can be customized for a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow, Keras.

Also, the Rasa Stack is a platform that has seen rapid growth within just two years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted like place or time. From the previous example, by intent, we understand the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, it’s an operation which can be performed by the bot. It could be replying something (Text, Image, Video, Suggestion, etc.) in return, querying a database or any other possibility by code.
  • Stories: These are sample interactions between the user and bot, defined in terms of intents captured and actions performed. The developer specifies what to do when a user input of some intent arrives, with or without certain entities. For example: if the user’s intent is to find the day of the week and the entity is “today”, find the day of the week for today and reply.
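
As an illustration, a story for the day-of-the-week example above might look like this in Rasa’s training-data format (the intent, entity, and action names here are hypothetical, not from an actual project):

```
## day of the week story
* find_day{"date": "today"}
  - action_tell_day
```

The `* find_day{...}` line is the captured intent with its entity, and the indented `- action_tell_day` line is the action the bot should take in response.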

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides intent classification and entity extraction. This helps the chatbot understand what the user is saying. Refer to the diagram below to see how NLU processes user input.
[Diagram: how RASA NLU processes user input]

  • RASA CORE: uses machine learning techniques to generalize the dialogue flow of the system. It also predicts the next best action based on the input from NLU, the conversation history, and the training data.
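
To make the NLU step concrete, here is a toy sketch (plain Python, not the actual Rasa API) of the kind of structured output NLU produces from raw text; the intent and entity names are made up for illustration:

```python
# Toy stand-in for an NLU interpreter: keyword-based intent/entity spotting.
# A real Rasa NLU pipeline produces a similar dictionary from a trained model.

def fake_nlu_parse(text: str) -> dict:
    result = {"text": text, "intent": None, "entities": []}
    if "which day" in text.lower():
        result["intent"] = {"name": "find_day", "confidence": 0.95}
    if "today" in text.lower():
        result["entities"].append({"entity": "date", "value": "today"})
    return result

parsed = fake_nlu_parse("Which day is today?")
print(parsed["intent"]["name"])        # find_day
print(parsed["entities"][0]["value"])  # today
```

The downstream dialogue engine (Core) consumes exactly this kind of structure, not the raw text.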

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:

[Diagram: how an assistant built with Rasa responds to a message]

The steps are as follows:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.
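
The steps above can be sketched as a plain-Python loop; the class and function names below are illustrative, not the real Rasa internals:

```python
# Illustrative sketch of the message-handling flow described above:
# Interpreter -> Tracker -> Policy -> Action. All names are hypothetical.

class Tracker:
    """Keeps track of conversation state as a list of events (step 2, 5)."""
    def __init__(self):
        self.events = []
    def update(self, event):
        self.events.append(event)
    def current_state(self):
        return self.events[-1] if self.events else None

def interpret(text):
    # Step 1: the Interpreter (NLU) converts raw text into a dictionary.
    return {"text": text, "intent": "greet", "entities": []}

def choose_action(state):
    # Steps 3-4: a trivial "policy" mapping intent -> next action.
    return {"greet": "utter_greet"}.get(state["intent"], "action_default")

def handle_message(text, tracker):
    parsed = interpret(text)                          # step 1
    tracker.update(parsed)                            # step 2
    action = choose_action(tracker.current_state())   # steps 3-4
    tracker.update({"action": action})                # step 5
    return "Hello!" if action == "utter_greet" else "Sorry?"  # step 6

tracker = Tracker()
print(handle_message("hi there", tracker))  # Hello!
```

In Rasa itself the policy is a trained model rather than a lookup table, but the control flow is the same.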

Areas of application

RASA serves as a one-stop solution across various industries:

  • Customer Service: broadly used for technical support, accounts and billing, conversational search, and travel concierge services.
  • Financial Services: used in many banks for account management, bills, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and more.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

Observability versus Monitoring

Sri Chaganty

“Observability” has become a key trend in Service Reliability Engineering practice.  One of the recommendations from Gartner’s latest Market Guide for IT Infrastructure Monitoring Tools released in January 2020 says, “Contextualize data that ITIM tools collect from highly modular IT architectures by using AIOps to manage other sources, such as observability metrics from cloud-native monitoring tools.”

Like so many other terms in software engineering, ‘observability’ is a term borrowed from an older physical discipline: in this case, control systems engineering. Let me use the definition of observability from control theory in Wikipedia: “observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.”

Observability is gaining attention in the software world because of its effectiveness at enabling engineers to deliver excellent customer experiences with software despite the complexity of the modern digital enterprise.

When we blew up the monolith into many services, we lost the ability to step through our code with a debugger: it now hops the network.  Monitoring tools are still coming to grips with this seismic shift.

How is observability different from monitoring?

Monitoring requires you to know what you care about before you know you care about it. Observability allows you to understand your entire system and how it fits together, and then use that information to discover what specifically you should care about when it’s most important.

Monitoring requires you to already know what normal is. Observability allows discovery of different types of ‘normal’ by looking at how the system behaves, over time, in different circumstances.

Monitoring asks the same questions over and over again. Is the CPU usage under 80%? Is memory usage under 75%? Is the latency under 500 ms? This is valuable information, but monitoring only covers known problems.
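
A monitoring check of this kind is just a fixed predicate over known metrics. A minimal sketch, using the thresholds from the text (the metric names and values are made up):

```python
# Monitoring asks the same fixed questions over known metrics.
# Thresholds mirror the examples in the text; the sample values are invented.

THRESHOLDS = {"cpu_pct": 80, "mem_pct": 75, "latency_ms": 500}

def check(metrics: dict) -> list:
    """Return the names of metrics that breach their predefined thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) >= limit]

print(check({"cpu_pct": 91, "mem_pct": 60, "latency_ms": 200}))  # ['cpu_pct']
```

The point of the contrast: everything this code can ever tell you had to be written down in `THRESHOLDS` in advance, whereas observability lets you form new questions after the fact.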

Observability, on the other hand, is about asking different questions almost all the time. You discover new things.


Metrics do not equal observability.

What Questions Can Observability Answer?

Below are sample questions that can be addressed by an effective observability solution:

  • Why is x broken?
  • What services does my service depend on — and what services are dependent on my service?
  • Why has performance degraded over the past quarter?
  • What changed? Why?
  • What logs should we look at right now?
  • What is system performance like for our most important customers?
  • What SLO should we set?
  • Are we out of SLO?
  • What did my service look like at time point x?
  • What was the relationship between my service and x at time point y?
  • What was the relationship of attributes across the system before we deployed? What’s it like now?
  • What is most likely contributing to latency right now? What is most likely not?
  • Are these performance optimizations on the critical path?

About the Author –

Sri is a Serial Entrepreneur with over 30 years’ experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.

Business with a Heart

Balaji Uppili

People and technology are converging like never before, as the world is gripped by COVID-19. Just a few months ago, nobody could have predicted or foreseen the way businesses are having to work today. As we were strategizing on corporate governance, digital transformation, and the best of resiliency plans to ensure business continuity, no one ever anticipated the scale and enormity of COVID-19.

Today, it has become obvious that COVID-19 has brought about the convergence of technology and humanity, and shown how it can change the way businesses work and function. While we as leaders have been thinking largely about business outcomes, this pandemic has triggered a more humane approach, and that approach is here to stay. The humane approach will be the differentiator and will prove the winner.

There is no doubt that this pandemic has brought an urgent need to accelerate our digital capabilities. With the focus on strong IT infrastructure and remote working, workforces were able to transition to working from home and meeting through video conferencing. Surprisingly, this has increased the humane aspect of business relations: it has become alright for both parties to see children, spouses, or pets in meeting backgrounds, and that in itself has broken down huge barriers and formalities. It is refreshing to see the emerging empathy that grows stronger with every meeting, increasing collaboration and communication.

It is becoming increasingly clear that we have overlooked how people have been showing up to work. It is now more visible that people have equally strong roles within the family. When we see parents home-schooling their children or handling other care obligations, we are viewing their personal lives and can empathize with them more. We are seeing the impact that business can have on people and their personal lives, and this is an unprecedented opportunity for leaders to put our people first.

And with customers being the center of every business, not being able to meet in person has warranted newer ways to collaborate and has further strengthened customer-centricity initiatives. It has become evident that no matter how much we as leaders think of automating operations, it is human connections that run businesses successfully. Many assumptions have unraveled: important business imperatives like the criticality of clean workspace compliance, and the fact that offshoring thousands of miles away is not a compromise but a very cost-effective and efficient way of getting things done. Productivity has also increased; the work done so far has had a positive impact of at least 20%, or even more in certain situations. As boundaries and barriers are broken, the expectations of who should work on something and when they should work on it have become less rigid. Employees are less regimental about time. Virtual crowdsourcing has become the norm: you throw an idea at a group of people, and whoever has the ability and the bandwidth takes care of the task instead of it being formally assigned, which highlights the fungibility of people.

All in all, the reset in the execution processes and introducing much more of a humane approach is here to stay and make the new norm even more exciting.

About the Author –

Balaji has over 25 years of experience in the IT industry, across multiple verticals. His enthusiasm, energy, and client focus is a rare gift, and he plays a key role in bringing new clients into GAVS. Balaji heads the Delivery department and passionately works on Customer delight. He says work is worship for him and enjoys watching cricket, listening to classical music, and visiting temples.

JAVA – Cache Management

Sivaprakash Krishnan

This article explores the various Java caching technologies that can play a critical role in improving application performance.

What is Cache Management?

A cache is a temporary, high-speed memory buffer that stores the most frequently used data, like live transactions, logical datasets, etc. It greatly improves the performance of an application, as reads/writes happen in the memory buffer, reducing retrieval time and the load on the primary source. Implementing and maintaining a cache in any Java enterprise application is important.

  • The client-side cache is used to temporarily store static data transmitted over the network from the server, to avoid unnecessary calls to the server.
  • The server-side cache could be a query cache, CDN cache or a proxy cache where the data is stored in the respective servers instead of temporarily storing it on the browser.

Adoption of the right caching techniques and tools allows the programmer to focus on implementing business logic, leaving backend complexities like cache expiration, mutual exclusion, spooling, and cache consistency to the frameworks and tools.

Caching should be designed specifically for the environment, considering single/multiple JVMs and clusters. Given below are multiple scenarios where caching can be used to improve performance.

1. In-process Cache – The In-process/local cache is the simplest cache, where the cache-store is effectively an object which is accessed inside the application process. It is much faster than any other cache accessed over a network and is strictly available only to the process that hosted it.


  • If the application is deployed only in one node, then in-process caching is the right candidate to store frequently accessed data with fast data access.
  • If the in-process cache is to be deployed in multiple instances of the application, then keeping data in-sync across all instances could be a challenge and cause data inconsistency.
  • An in-process cache can bring down the performance of any application where the server memory is limited and shared. In such cases, a garbage collector will be invoked often to clean up objects that may lead to performance overhead.
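
At its simplest, an in-process cache is just an object living in the application’s own memory. A minimal sketch, in Python for brevity (the article’s context is Java, but the idea is language-agnostic, and `fetch_from_db` is a made-up stand-in for the primary data source):

```python
# Minimal in-process cache: a map inside the application process.
# On a miss the loader (the primary data source) is consulted; on a hit
# the value is served straight from memory.

class InProcessCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader   # called only on a cache miss

    def get(self, key):
        if key not in self._store:
            self._store[key] = self._loader(key)   # miss: hit the source
        return self._store[key]                    # hit: served from memory

calls = []
def fetch_from_db(key):
    calls.append(key)           # record how often the "database" is hit
    return key.upper()

cache = InProcessCache(fetch_from_db)
cache.get("user:42")
cache.get("user:42")            # second read is served from memory
print(len(calls))               # 1 -- the source was queried only once
```

The multi-instance consistency problem described above follows directly from this design: each process holds its own private `_store`, so two instances can hold different values for the same key.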

In-Memory Distributed Cache

Distributed caches are built externally to an application; they support reads/writes to/from data repositories, keep frequently accessed data in RAM, and avoid continuously fetching data from the data source. Such caches can be deployed on a cluster of multiple nodes, forming a single logical view.

  • In-memory distributed cache is suitable for applications running on multiple clusters where performance is key. Data inconsistency and shared memory aren’t matters of concern, as a distributed cache is deployed in the cluster as a single logical state.
  • As inter-process communication over a network is required to access the cache, latency, failures, and object serialization are overheads that could degrade performance.

2. In-memory database

An in-memory database (IMDB) stores data in main memory instead of on disk to produce quicker response times. Queries are executed directly on the dataset stored in memory, avoiding frequent disk reads/writes, which provides better throughput and faster response times. It provides a configurable data persistence mechanism to avoid data loss.

Redis is an open-source in-memory data structure store used as a database, cache, and message broker. It offers data replication, different levels of persistence, HA, automatic partitioning that improves read/write.

Replacing the RDBMS with an in-memory database will improve the performance of an application without changing the application layer.

3. In-Memory Data Grid

An in-memory data grid (IMDG) is a data structure that resides entirely in RAM and is distributed among multiple servers.

Key features

  • Parallel computation of the data in memory
  • Search, aggregation, and sorting of the data in memory
  • Transactions management in memory
  • Event-handling

Cache Use Cases

There are use cases where a specific caching strategy should be adopted to improve the performance of the application.

1. Application Cache

Application cache caches web content that can be accessed offline. Application owners/developers have the flexibility to configure what to cache and make it available for offline users. It has the following advantages:

  • Offline browsing
  • Quicker retrieval of data
  • Reduced load on servers

2. Level 1 (L1) Cache

This is the default transactional cache per session. It can be managed by any Java Persistence API (JPA) provider or object-relational mapping (ORM) tool.

The L1 cache stores entities that fall under a specific session and are cleared once the session is closed. If there are multiple transactions inside one session, entities from all those transactions will be stored.

3. Level 2 (L2) Cache

The L2 cache can be configured to provide custom caches that can hold the data for all entities to be cached. It’s configured at the session-factory level and exists as long as the session factory is available. It can be shared across:

  • Sessions in an application.
  • Applications on the same servers with the same database.
  • Application clusters running on multiple nodes but pointing to the same database.

4. Proxy / Load balancer cache

Enabling this reduces the load on application servers. When similar content is requested frequently, the proxy serves it from the cache rather than routing the request back to the application servers.

When a dataset is requested for the first time, the proxy saves the response from the application server to a disk cache and uses it to respond to subsequent client requests without routing them back to the application server. Apache, NGINX, and F5 support proxy caching.
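
As an illustration, enabling a proxy cache in NGINX takes only a few directives; the paths, zone name, timings, and upstream name below are placeholders, not a production configuration:

```nginx
# Hypothetical NGINX proxy-cache configuration (all values are placeholders).
proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

server {
    location / {
        proxy_cache app_cache;          # serve repeat requests from the cache
        proxy_cache_valid 200 10m;      # cache successful responses for 10 min
        proxy_pass http://app_servers;  # upstream application servers
    }
}
```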


5. Hybrid Cache

A hybrid cache is a combination of JPA/ORM frameworks and open source services. It is used in applications where response time is a key factor.

Caching Design Considerations

  • Data loading/updating
  • Performance/memory size
  • Eviction policy
  • Concurrency
  • Cache statistics

1. Data Loading/Updating

Data loading into a cache is an important design decision for maintaining consistency across all cached content. The following approaches can be considered to load data:

  • Using default function/configuration provided by JPA and ORM frameworks to load/update data.
  • Implementing key-value maps using open-source cache APIs.
  • Programmatically loading entities through automatic or explicit insertion.
  • External application through synchronous or asynchronous communication.

2. Performance/Memory Size

Resource configuration is an important factor in achieving the performance SLA. Available memory and CPU architecture play a vital role in application performance. Available memory has a direct impact on garbage collection performance. More GC cycles can bring down the performance.

3. Eviction Policy

An eviction policy enables a cache to ensure that the size of the cache doesn’t exceed the maximum limit. The eviction algorithm decides what elements can be removed from the cache depending on the configured eviction policy thereby creating space for the new datasets.

There are various popular eviction algorithms used in caching solutions:

  • Least Recently Used (LRU)
  • Least Frequently Used (LFU)
  • First In, First Out (FIFO)
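
For instance, an LRU policy can be sketched in a few lines. This is an illustrative Python sketch using `OrderedDict` (in Java, `LinkedHashMap` in access order with `removeEldestEntry` gives the same behavior):

```python
from collections import OrderedDict

# Minimal LRU eviction sketch: a capacity-bounded map that evicts the
# least recently used entry once the limit is exceeded.

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # "a" is now most recently used
cache.put("c", 3)       # capacity exceeded: "b" is evicted
print(cache.get("b"))   # None
print(cache.get("a"))   # 1
```

An LFU or FIFO policy differs only in which entry `put` chooses to evict.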

4. Concurrency

Concurrency is a common issue in enterprise applications. It can create conflicts and leave the system in an inconsistent state, for instance when multiple clients try to update the same data object at the same time during a cache refresh. A common solution is to use a lock, but this may affect performance, so optimization techniques should be considered.
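
The lock-based approach can be sketched as follows; this is an illustrative Python sketch (in Java, `ConcurrentHashMap.computeIfAbsent` achieves a similar per-key guarantee), showing both the guarantee and the cost, namely that refreshes are serialized:

```python
import threading

# Sketch of lock-guarded cache refresh: only one thread may check and
# reload an entry at a time, so a value is loaded exactly once even
# under concurrent access -- at the cost of serializing all lookups.

class GuardedCache:
    def __init__(self, loader):
        self._store = {}
        self._lock = threading.Lock()
        self._loader = loader

    def get(self, key):
        with self._lock:                        # one refresher at a time
            if key not in self._store:
                self._store[key] = self._loader(key)
            return self._store[key]

loads = []
def load(key):
    loads.append(key)       # record every trip to the data source
    return len(key)

cache = GuardedCache(load)
threads = [threading.Thread(target=cache.get, args=("k",)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(loads))           # 1 -- the value was loaded exactly once
```

Finer-grained schemes (per-key locks, optimistic checks) reduce the contention that a single global lock introduces.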

5. Cache Statistics

Cache statistics are used to identify the health of the cache and provide insights into its behavior and performance. The following attributes can be used:

  • Hit count: the number of times a cache lookup has returned a cached value.
  • Miss count: the number of times a cache lookup has returned a null, newly loaded, or uncached value.
  • Load success count: the number of times a cache lookup has successfully loaded a new value.
  • Total load time: the time spent (in nanoseconds) loading new values.
  • Load exception count: the number of exceptions thrown while loading an entry.
  • Eviction count: the number of entries evicted from the cache.
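
Most of these counters are straightforward to maintain in the cache itself. A minimal illustrative sketch (caching libraries such as Guava or Caffeine expose equivalent counters out of the box):

```python
# Minimal cache-statistics sketch tracking the attributes listed above.

class StatCache:
    def __init__(self, loader):
        self._store = {}
        self._loader = loader
        self.hit_count = 0
        self.miss_count = 0
        self.load_success_count = 0
        self.load_exception_count = 0

    def get(self, key):
        if key in self._store:
            self.hit_count += 1
            return self._store[key]
        self.miss_count += 1
        try:
            value = self._loader(key)
            self.load_success_count += 1
        except Exception:
            self.load_exception_count += 1
            raise
        self._store[key] = value
        return value

cache = StatCache(lambda k: k * 2)
cache.get("x")
cache.get("x")
print(cache.hit_count, cache.miss_count)  # 1 1
```

The hit/miss ratio derived from these counters is usually the first number to check when deciding whether a cache is earning its memory.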

Various Caching Solutions

There are various Java caching solutions available — the right choice depends on the use case.


At GAVS, we focus on building a strong foundation of coding practices. We encourage and implement the “Design First, Code Later” principle and “Design Oriented Coding Practices” to bring in design thinking and engineering mindset to build stronger solutions.

We have been training and mentoring our talent on cutting-edge JAVA technologies, building reusable frameworks, templates, and solutions on the major areas like Security, DevOps, Migration, Performance, etc. Our objective is to “Partner with customers to realize business benefits through effective adoption of cutting-edge JAVA technologies thereby enabling customer success”.

About the Author –

Sivaprakash is a solutions architect with strong solutions and design skills. He is a seasoned expert in JAVA, Big Data, DevOps, Cloud, Containers, and Micro Services. He has successfully designed and implemented a stable monitoring platform for ZIF. He has also designed and driven Cloud assessment/migration, enterprise BRMS, and IoT-based solutions for many of our customers. At present, his focus is on building ‘ZIF Business’ a new-generation AIOps platform aligned to business outcomes.

IoT Adoption during the Pandemic


Naveen KT

From lightbulbs to cities, IoT is adding a level of digital intelligence to the things around us. The Internet of Things (IoT) refers to physical devices connected to the internet, all collecting and sharing data that can then be used for various purposes. The arrival of super-cheap computers and the ubiquity of wireless networks are behind the widespread adoption of IoT. It is possible to turn any object, from a pill to an airplane, into an IoT-enabled device, making devices smarter by letting them ‘sense’ and communicate without any human involvement.

Let us look at the developments that enabled the commercialization of IoT.

History

The idea of adding sensors and intelligence to basic objects dates back to the 1980s and 1990s, but progress was slow because the technology was not ready. Chips were too big and bulky, and there was no way for an object to communicate effectively.

Processors had to become cheap and power-frugal enough to be all but disposable before it finally became cost-effective to connect billions of devices. The adoption of RFID tags and IPv6 was a necessary step for IoT to scale.

Kevin Ashton coined the phrase ‘Internet of Things’ in 1999, although it took at least another decade for the technology to catch up with his vision. According to Ashton, “The IoT integrates the interconnectedness of human culture (our things) with our digital information system (internet). That’s the IoT.”

Early suggestions for the IoT include ‘Blogjects’ (objects that blog and record data about themselves to the internet), ubiquitous computing (or ‘ubicomp’), invisible computing, and pervasive computing.

How big is IoT?


IDC predicts that there will be 41.6 billion connected IoT devices by 2025. It also suggests that industrial and automotive equipment represent the largest opportunity for connected ‘things’.

Gartner predicts that the enterprise and automotive sectors will account for 5.8 billion devices this year.

The COVID-19 pandemic has further enhanced the need for IoT-enabled devices to help nations tackle the crisis.

IoT for the Government

Information about the movement of citizens is urgently required by governments to track the spread of the virus and potentially monitor their quarantine measures. Some IoT operators have solutions that could serve these purposes.

  • Telia’s Division X has developed Crowd Insights, which provides aggregated smartphone data to city and transport authorities in the Nordic countries. The tool is being used to track the movement of citizens during the quarantine.
  • Vodafone provides insights on traffic congestion.
  • Telefonica developed Smart steps, which aggregates data on footfall and movement for the transport, tourism, and retail sectors.

Relaxed privacy regulations are also allowing personal data to help in tracking clusters of infection. For example, in Taiwan, high-risk quarantined patients were monitored through their mobile phones to ensure compliance with quarantine rules. In South Korea, officials track infected citizens and alert others if they come into contact with them. The government of Israel went as far as passing an emergency law to monitor the movement of infected citizens via their phones.

China is already using mass temperature scanning devices in public areas like airports. A team of researchers at UMass Amherst is testing a device that can analyze coughing sounds to identify the presence of flu-like symptoms among crowds.

IoT in Health care

COVID-19 could be the trigger to explore new solutions and be prepared for any such future pandemics, just as the SARS epidemic in 2003 which spurred the governments in South Korea and Taiwan to prepare for today’s problems.

IT operations analytics

Remote patient monitoring (RPM) and telemedicine could be helpful in managing a future pandemic. For example, patients with chronic diseases who are required to self-isolate to reduce their exposure to COVID-19 but need continuous care would benefit from RPM. Operators like Orange, Telefónica, and Vodafone already have some experience in RPM.

Connected thermometers are being used in hospitals to collect data while maintaining a social distance. Smart wearables are also helpful in preventing the spread of the virus and responding to those who might be at risk by monitoring their vital signs.


Telehealth is widely adopted in the US, and the authorities there are relaxing reimbursement rules and regulations to encourage the extension of specific services. These include the following.

  • Medicare, the US healthcare program for senior citizens, has temporarily expanded its telehealth service to enable remote consultations.
  • The FCC has made changes to the Rural Health Care (RHC) and E-Rate programs to support telemedicine and remote learning. Network operators will be able to provide incentives or free network upgrades that were previously not permitted, for example, for hospitals that are looking to expand their telemedicine programs.

IoT for Consumers

The IoT promises to make our environment smarter, more measurable, and interactive. COVID-19 is highly contagious and can be transmitted from one person to another even by touching objects used by an affected person. The WHO has instructed us to disinfect and sanitize high-touch objects. IoT presents an ingenious solution to avoid touching these surfaces altogether: hands-free, sensor-enabled devices and solutions like smart lightbulbs, door openers, and smart sinks help prevent the spread of the virus.

Security aspects of IoT

Security is one of the biggest issues with the IoT. These sensors collect extremely sensitive data, like what we say and do in our own homes and where we travel. Many IoT devices lack security patches, which means they are permanently at risk. Hackers are now actively targeting IoT devices such as routers and webcams, because their inherent lack of security makes them easy to compromise and roll into giant botnets.


The IoT bridges the gap between the digital and physical worlds, which means hacking into devices can have dangerous real-world consequences. Hacking into the sensors controlling the temperature in a power station could lead to catastrophic decisions, and taking control of a driverless car could also end in disaster.

Overall IoT makes the world around us smarter and more responsive by merging the digital and physical universe. IoT companies should look at ways their solutions can be repurposed to help respond to the crisis.


References:

  • https://www.analysysmason.com/Research/Content/Comments/covid19-iot-role-rdme0-rma17/
  • shorturl.at/wBFGT

About the Author –

Naveen is a software developer at GAVS. He teaches underprivileged children and is interested in giving back to society in as many ways as he can. He is also interested in dancing, painting, playing keyboard, and is a district-level handball player.