From Good to Great – DNA of a Successful Leader (PART II)

Rajeswari S

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others." – Jack Welch

In my previous article, I wrote about a few qualities that make for a good leader. In this article, I discuss a few ways in which a leader can go from good to great.

  1. Seek to understand and be understood: Seeking feedback and taking criticism is not easy for anyone. When you hold a leadership position and people look up to you, it is even more difficult. But a true leader does exactly that, and does it HONESTLY. A good leader focuses on the needs of others. When you are open to feedback and constructive criticism, you earn the right to give the same to others. Make genuine efforts to listen when your team speaks. Great leaders listen first, speak second.
  2. Be there: Being there is not just about being the center of attention. You need to be there for your people during critical times and help members across your organization find solutions to roadblocks. Mentorship is an art. Your people should accept you as their mentor, and earning that space is not easy.
  3. Demonstrate empathy and compassion: This quality is an extension of the previous point. When you are laser-focused on your goals, it can be difficult to notice the needs of others around you. You need to know not only how your actions affect people, but also what you need to do to show understanding and compassion for others.
  4. Get curious: Leaders are often driven by an insatiable desire to learn; they push the limits of what’s possible and explore opportunities as a continuous process. Expanding your mind can be as simple as reading and asking ‘why’ more often. Curiosity can help you get to the root of a problem and promote better ideas and thoughts. Leaders think about and embrace others’ ideas. The right question, asked with the right intention, can lead to many opportunities and achievements.
  5. Be in the know: Leaders go out of their way to stay educated and up to date. Intentional learning is a continuous process of acquiring and understanding information with the goal of becoming more knowledgeable and prepared on a specific subject. People cannot always see your work; it is how you speak that creates the first impression. When you make an informed, up-to-date speech, you get the edge over others.
  6. Enjoy the ride: Smart leaders know that the journey is often more rewarding than the destination, which is why they take time to enjoy life and what they have already achieved; they know nothing lasts forever. When you can enjoy the journey, you’ll be amazed by what you can learn. A great leader embraces each day as an experience. They grow every day!
  7. Celebrate and connect: Leaders working toward a brighter future share their success with the people they care about – business partners and customers, family and friends, employees and their families. Great leaders celebrate others’ victories as their own; this creates a high-performing team and culture. A true captain takes time to know the people around her and their lives. It goes a long way in running not only a successful business but a happy one too!
  8. Pursue new experiences: Mountains are interesting to watch and hike. Why? Because of their rugged terrain and unpredictable nature. Straight roads are boring; that is why we doze off on long highway drives! An intelligent leader is never complacent and constantly pushes out of their comfort zone. To stay prepared for any bumps along the road, leaders actively pursue new experiences that let them learn and grow, from starting a new venture to coaching a little league team to diversifying the business.

Unique brands of Leadership

A quick look at successful CEOs, new-age entrepreneurs, and their unique leadership mantras:

• Sundar Pichai, CEO, Alphabet Inc. and its subsidiary Google LLC

Leadership mantra:

  1. Never forget your roots
  2. Focus more on others’ success than your own
  3. Empower the youth
  4. Stay humble and keep learning

• Bill Gates, Founder, Microsoft

Leadership mantra: 

  1. Knowledge is different from wisdom
  2. Take a step-by-step approach to make progress towards your vision
  3. Empower people to create new opportunities to explore ideas; Embrace creativity
  4. Be caring and passionate

• Suchi Mukherjee, CEO, Limeroad, an Indian online marketplace
Leadership mantra: True leadership is about enabling the voice of the youngest team member.

• Amit Agarwal, CEO, NoBroker, a real estate search portal
Leadership mantra: Leaders provide employees the opportunity to be leaders themselves.


About the Author –

Rajeswari is part of the IP team at GAVS. She has been involved in technical and creative content development for the past 13 years. She is passionate about music and writing, and spends her free time watching movies or going for a highway drive.

 

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. But new advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is and then the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard. They are used to isolate different cargoes transported via ships. Software technology uses a similar containerization approach.

Containers are different from Virtual Machines (VMs), which need a guest operating system running on a host operating system (OS). Containers use OS-level virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate operating system.

In containers, software and its dependencies are packaged together so that they can run anywhere, whether on an on-premises desktop or in the cloud.


Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Containers Image Vulnerabilities

When a container image is created, it may be fully patched, with no known vulnerabilities. But vulnerabilities may be discovered later, by which time the image is out of date. Traditional systems can be patched in place as soon as a fix is available, but for containers, updates must be made upstream in the image, which is then redeployed. So containers often carry vulnerabilities simply because an older image version is still deployed.

Also, if the container image is misconfigured or unwanted services are running, it will lead to vulnerabilities.

Countermeasures

If you use traditional vulnerability assessment tools to assess containers, it will lead to false positives. You need to consider a tool that has been designed to assess containers so that you can get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.
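As a minimal sketch of such pre-deployment validation, the function below checks a parsed image config against a few simple policy rules. The rule set and the config shape (loosely modeled on `docker inspect` output) are illustrative assumptions, not a real scanner's API.

```python
# Hypothetical pre-deployment image-config check (illustrative rules only).

def validate_image_config(config: dict) -> list[str]:
    """Return a list of policy violations found in an image config."""
    findings = []
    # rule 1: the container should not run as root
    if config.get("User", "") in ("", "root", "0"):
        findings.append("container runs as root")
    # rule 2: no clear-text secrets baked into environment variables
    for env in config.get("Env", []):
        key = env.split("=", 1)[0].upper()
        if any(word in key for word in ("PASSWORD", "SECRET", "TOKEN")):
            findings.append(f"possible clear-text secret in env: {key}")
    # rule 3: images should define a health check
    if config.get("HealthCheck") is None:
        findings.append("no HEALTHCHECK defined")
    return findings

# A deliberately misconfigured example image
bad = {"User": "", "Env": ["DB_PASSWORD=hunter2"], "HealthCheck": None}
for finding in validate_image_config(bad):
    print(finding)
```

A real pipeline would run such checks (via a purpose-built container scanner) as a gate before any image is pushed to the registry.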

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together. Hence, there are chances of malicious files getting added, intentionally or unintentionally. Such malicious software has the same effect as it would on traditional systems.

If secrets are embedded in clear text, it may lead to security risks if someone unauthorized gets access.

Countermeasures

Continuous monitoring of all images for embedded malware with signature and behavioral detection can mitigate embedded malware risks.

Secrets should never be stored inside a container image; when required, they should be provided dynamically at runtime.
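A minimal sketch of providing a secret dynamically at runtime: prefer a file mounted into the container by the orchestrator, and fall back to an environment variable injected at deploy time. The `/run/secrets` path mirrors what orchestrators such as Kubernetes and Docker Swarm mount, but the names here are assumptions.

```python
# Illustrative runtime secret lookup; paths and variable names are assumptions.
import os

def get_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    # 1. prefer a file mounted into the container at deploy time
    path = os.path.join(secrets_dir, name)
    if os.path.exists(path):
        with open(path) as fh:
            return fh.read().strip()
    # 2. fall back to an environment variable injected at runtime
    value = os.environ.get(name.upper())
    if value is None:
        raise RuntimeError(f"secret {name!r} not provided at runtime")
    return value

os.environ["DB_PASSWORD"] = "injected-at-runtime"   # simulate injection
print(get_secret("db_password"))                    # prints the injected value
```

Either way, the secret never appears in the image layers or in version control.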

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This can lead teams to run third-party container images without validating them, which can introduce data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images, to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

A registry is a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, man-in-the-middle attacks can intercept network traffic to steal programmer or admin credentials, or to serve outdated or fraudulent images.

To overcome this, configure development tools and container runtimes to connect to registries only over encrypted channels.

  2. Insufficient authentication and authorization restrictions

As we have seen, registries store container images that may contain sensitive information. Insufficient authentication and authorization can result in the exposure of an app's technical details, the loss of intellectual property, and even the compromise of containers.

Access to registries should be authenticated, and only trusted entities should be able to add images. All write access should be periodically audited, and read access should be logged. Proper authorization controls should be enabled to avoid these risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed on the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating every user as an administrator puts all containers managed by the orchestrator at risk.

Orchestrator users should be given only the required access, with proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In container deployments, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters only see the encrypted packets traveling between hosts. This security blindness makes traffic monitoring ineffective.

To overcome this risk, orchestrators should separate network traffic into virtual networks according to sensitivity level.

  3. Orchestrator node trust

Special attention is needed to maintain trust between hosts, especially on the orchestrator node. Weaknesses in orchestrator configuration increase risk; for example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, the orchestrator should be configured securely for nodes and apps. If any node is compromised, it should be possible to isolate and remove it without disturbing the other nodes.

Container Risks

  1. App vulnerabilities

It is always good to have defense in depth. Even after following the recommendations above, containers may still be compromised if the apps they run are vulnerable.

As noted earlier, traditional security tools may not be effective when used for containers. You need a container-aware tool that detects behavioral anomalies in the app at runtime so that issues can be found and mitigated.

  2. Rogue containers

It is possible to have rogue containers, for instance ones that developers launched to test their code and then left behind. These may lead to exploits, as such containers might not have been thoroughly checked for security loopholes.

You can overcome this with separate environments for development, test, and production, combined with role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has an attack surface, and the larger it is, the easier it is for an attacker to find and exploit a vulnerability and compromise the host operating system and the containers that run on it.

If you cannot use a container-specific operating system, you can follow the NIST SP 800-123 guide to server security to minimize the attack surface.

  2. Shared kernel

A host OS that runs only containers has a smaller attack surface than a general-purpose host machine, which needs extra libraries and packages to run a web server, a database, and other software.

You should not mix containerized and non-containerized workloads on the same host machine.

If you wish to explore this topic further, I suggest reading NIST SP 800-190.



About the Author –

Anandharaj is a Lead DevSecOps at GAVS and has over 13 years of experience in cybersecurity across different verticals, including network security, application security, computer forensics, and cloud security.

Customer Focus Realignment in a Pandemic Economy

Ashish Joseph

Business Environment Overview

The Pandemic Economy has created an environment that has forced businesses to either adapt or perish; it has become a test of the survival of the fittest. On the brighter side, organizations have stepped up and adapted to the crisis in ways that have them working faster and better than ever before.

During this crisis, companies have been strategic in understanding their focus areas and where to concentrate most. From a high-level perspective, businesses have focused on recovering their sources of revenue, rebuilding operations, restructuring the organization, and accelerating their digital transformation initiatives. In a way, the pandemic has forced companies to optimize their strategies and harness their core competencies in a hyper-competitive survival environment.

Need for Customer Focused Strategies

A pivotal strategy for maintaining and sustaining growth is for businesses to avoid the churn of their existing customers and to ensure the quality of delivery builds trust for future collaborations and referrals. Many organizations, including GAVS, have understood that Customer Experience and Customer Success are consequential for customer retention and brand affinity.

Businesses should realign the way they look at sales funnels. A large portion of the annual budget is usually allocated to top-of-the-funnel activities to acquire more customers. But companies with customer success engraved in their souls believe that the bottom of the funnel feeds the top of the funnel. This strategy results in a self-sustaining, recurring revenue model for the business.

An independent survey by the Customer Service Managers and Professionals Journal found that companies pay six times more to acquire a new customer than to keep an existing one. In this pandemic economy, customer acquisition costs will be even higher than before, as organizations must be very judicious in their spending. The best step forward is to strive for excellence in customer experience and deliver measurable value. A study by Bain and Company titled "Prescription for Cutting Costs" describes how increasing customer retention by 5% increases profits by 25% to 95%.

The path to a sustainable, high-growth business is to adopt customer-centric strategies that yield more value and growth for customers. Enhancing customer experience should be a top priority, and proper governance must be in place to monitor and gauge strategies. Governance in the world of customer experience revolves around identifying and managing the resources needed to drive sustained action, establishing robust procedures to organize processes, and ensuring a framework for stellar delivery.

Scaling to ever-changing customer needs

Walker Information, a research firm, conducted independent research on B2B companies, focusing on the key initiatives that drive customer experience and future growth. The study included customer experience leaders, senior executives, and influencers representing a diverse set of business models in the industry. They published the report titled "Customer 2020: A Progress Report"; the following are the strategies that best meet the changing needs of customers in the B2B landscape.


Over 45% of the leaders highlighted the importance of developing a customer-centric culture that simplifies products and processes for the business. Now the question that we need to ask ourselves is, how do we as an organization scale up to these demands of the market? I strongly believe that each of us, in the different roles we play in the organization, has an impact.

The Executive Team can support more customer experience strategies, formulate success metrics, measure the impact of customer success initiatives, and ensure alignment with respect to the corporate strategy.

The Client Partners can ensure that they represent the voice of the customer, plot a feasible customer experience roadmap, be on point with customer intelligence data, and ensure transparency and communication with the teams and the customers. 

The cross-functional team managers and members can own and execute process improvements, personalize and customize customer journeys, and monitor key delivery metrics.

When all these members work in unison, the target goal of delivery excellence coupled with customer success is always achievable.

Going Above and Beyond

Organizations should aim for customers who can be retained for life. Retention depends on how far a business is willing to go the extra mile to add measurable value to its customers. Business contracts should evolve into partnerships that combine complementary competitive advantages to bring solutions to real-world business problems.

As customer success champions, we should reevaluate the possibilities in which we can make a difference for our customers. By focusing on our core competencies and using the latest tools in the market, we can look for avenues that can bring effort savings, productivity enhancements, process improvements, workflow optimizations, and business transformations that change the way our customers do business. 

After all, we are GAVS. We aim to galvanize a sense of measurable success through our committed teams and innovative solutions. We should always strive for delivery excellence and customer success in everything we do.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Quantum Computing

Vignesh Ramamurthy


In the MARVEL multiverse, Ant-Man has one of the coolest superpowers out there. He can shrink himself down as well as blow himself up to any size he desires! He was able to reduce to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, there is quantum computing. Quantum computers can be far more powerful than supercomputers for certain problems, and tech companies like Google, IBM, and Rigetti have built them.

In 2019, Google achieved quantum supremacy with its quantum computer 'Sycamore', claiming to perform in 200 seconds a calculation that would take the world's most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, at temperatures close to absolute zero.


Quantum Physics

Quantum computing is grounded in the discipline of quantum physics. Its heart and soul reside in what we call qubits (quantum bits) and superposition. So, what are they?

Let’s take a simple example: imagine you spin a coin. One cannot know the outcome until it falls flat on a surface; it can be either heads or tails. However, while the coin is spinning, you can say its state is both heads and tails at the same time. That state is called superposition, and such a two-state system is the analogue of a qubit.

So, how do they work and what does it mean?

We know a classical bit is either a 0 or a 1. A qubit can be in both states at the same time. In the end, these qubits pass through something called the "Grover operator", which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, much like how Doctor Strange sifted through millions of futures in the movie Infinity War. What is important, though, is to understand how this technically works.

We shall look at two explanations that I feel give an accurate picture of the technical aspects.

The first is as explained by Scott Aaronson, a quantum scientist at the University of Texas at Austin.

Amplitude – each qubit has an amplitude for being 0 and an amplitude for being 1, and amplitudes can be positive or negative. The goal is to arrange the computation so that the amplitudes leading to wrong answers cancel each other out; this way, the amplitude of the right answer remains the only likely outcome.

Quantum computers function using superconductivity. There is a chip the size of an ordinary computer chip, with little coils of wire in it, nearly big enough to see with the naked eye. Two different quantum states of current flowing through these coils correspond to 0 and 1, or to superpositions of them.

These coils interact with each other; nearby ones talk to each other and generate what is called an entangled state, an essential resource in quantum computing. The way qubits interact is completely programmable, so we can send electrical signals to these qubits and tweak them according to our requirements. The whole chip is placed in a refrigerator at a temperature close to absolute zero, so that superconductivity occurs and the coils briefly behave as qubits.

The second explanation is based on ‘Kurzgesagt — In a Nutshell’, a YouTube channel.

We know a bit is either a 0 or a 1. 4 classical bits can be in one of 2^4 = 16 different configurations at a time, and we can use just one of them. 4 qubits in superposition, however, can be in all 16 of those combinations at once.

This grows exponentially with each extra qubit: 20 qubits can hence store about a million values in parallel. As seen, entangled states interact with each other instantly, so by measuring one entangled qubit, we can directly deduce the properties of its partners.
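The claim that n qubits track 2^n configurations at once can be made concrete with a small NumPy statevector simulation (a classical simulation, so its cost is itself exponential): four qubits need 2^4 = 16 amplitudes, and applying a Hadamard gate to each qubit spreads the state evenly across all 16 configurations.

```python
import numpy as np

# |0> for one qubit: a length-2 vector of amplitudes
zero = np.array([1.0, 0.0])

# Hadamard gate: puts a single qubit into an equal superposition of 0 and 1
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Four qubits -> one statevector of 2**4 = 16 amplitudes (tensor product)
state = zero
for _ in range(3):
    state = np.kron(state, zero)
print(state.size)  # 16 configurations tracked by a single state

# Apply H to every qubit: a uniform superposition over all 16 configurations
H4 = H
for _ in range(3):
    H4 = np.kron(H4, H)
state = H4 @ state
print(np.round(state, 3))  # all 16 amplitudes equal 0.25 (probability 1/16 each)
```

Each extra qubit doubles the length of this vector, which is exactly the exponential growth described above.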

A normal logic gate gets a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them, and manipulates probabilities. Now it finally measures the outcome, collapsing superpositions to an actual sequence of 0s and 1s. This is how we get the entire set of calculations performed at the same time.

What is a Grover Operator?

We now know that by measuring one entangled qubit, we can deduce properties of all its partners. Grover's algorithm works because these quantum particles are entangled. Since one entangled qubit can vouch for its partners, the algorithm iterates until it finds the solution with a high degree of confidence.
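The "wash away all possibilities but one" behavior can be sketched with a small NumPy simulation of Grover's search over 16 items: an oracle flips the sign of the marked amplitude, and the diffusion (Grover) operator inverts every amplitude about the mean, amplifying the marked one. The marked index and parameters below are illustrative.

```python
import numpy as np

n = 4                       # qubits
N = 2 ** n                  # 16 basis states
target = 11                 # index of the marked item (arbitrary choice)

# Start in the uniform superposition (H applied to every qubit)
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked amplitude
oracle = np.eye(N)
oracle[target, target] = -1.0

# Grover (diffusion) operator: inversion about the mean amplitude
diffusion = 2.0 / N * np.ones((N, N)) - np.eye(N)

# About (pi/4) * sqrt(N) iterations are optimal; 3 for N = 16
iterations = int(np.pi / 4 * np.sqrt(N))
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

probs = state ** 2          # measurement probabilities
print(probs.argmax())       # 11 -- the marked item dominates (~96% probability)
```

After just three iterations, nearly all of the probability has concentrated on the marked item, compared with the 1/16 chance of a blind guess.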

What can they do?

As of now, quantum computing hasn't been applied to real-life situations, simply because the world doesn't yet have the required infrastructure.

Assuming they are efficient and ready to be used, we could make use of them in the following ways:

1) Self-driving cars are picking up pace. Quantum computers could help these cars by calculating all possible outcomes on the road. Apart from sensors to reduce accidents, roads consist of traffic signals. A quantum computer could work through every possibility of how traffic signals function, the time intervals, the traffic, everything, and feed self-driving cars the single best outcome accordingly. The result would be a seamless commute with no hassles whatsoever; the future as we see it in movies.

2) If an AI could construct a circuit board after trying every option in the design architecture, it could lead to promising AI-related applications.

Disadvantages

A quantum computer could breach RSA encryption, which underpins much of the internet, letting hackers steal confidential information related to health, defence, personal data, and other sensitive areas. At the same time, it could help achieve the most secure encryption, by identifying the best scheme among every possible one and finding the strongest wall against every attack that could plague the internet. If such security were built, it would take a completely new kind of attack to break it, and the chances of that are minuscule.

Quantum computing has its share of benefits, but it will take years to be put to use. The infrastructure and investment required are humongous, and it can only be adopted once there are reliable real-time use cases; much testing remains to be done. There is no doubt that quantum computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be prepared for it.


About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must've read about the use case "Zero Knowledge Proofs". In this article, I will elaborate on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How does the Blockchain-powered GAVS Rhodium Platform help address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
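As an illustration of this prover/verifier exchange, here is a toy implementation of one classic ZKP, the Schnorr identification protocol: the prover shows she knows a secret x with g^x = y (mod p) without ever revealing x. The tiny numeric parameters are for demonstration only and are nowhere near cryptographically secure.

```python
# Toy Schnorr identification protocol (illustrative, NOT secure parameters).
import secrets

p, g = 2267, 2              # tiny prime modulus and base: demo values only
x = 1234                    # prover's secret ("the data behind the proof")
y = pow(g, x, p)            # public value known to both parties

# 1. Prover commits to a random nonce
r = secrets.randbelow(p - 1)
t = pow(g, r, p)

# 2. Verifier issues a random challenge
c = secrets.randbelow(p - 1)

# 3. Prover responds; s reveals nothing about x on its own
s = (r + c * x) % (p - 1)

# 4. Verifier accepts iff g^s == t * y^c (mod p)
ok = pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified:", ok)      # True, and the verifier never learned x
```

The verifier ends up convinced the prover knows x, yet the transcript (t, c, s) leaks nothing about x itself, which is exactly the zero-knowledge property described above.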

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is performed owing to a lack of trust.

  • Insurance companies are always wary of fraudulent claims (a major issue in any case), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they run detailed checks.
  • Pharmacists may have to verify that a patient has indeed been prescribed the medicines before dispensing them.
  • Patients often want to make sure that the diagnosis and treatment given to them are proper and that no misdiagnosis has occurred.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or other wrongdoing.

In a healthcare scenario, any of the parties, i.e. patient, hospital, pharmacy, or insurance company, can take on the role of verifier; typically patients, and sometimes hospitals, are the provers.

While ZKP can be applied to any transaction involving the above parties, current industry research focuses mostly on patient privacy rights, and ZKP initiatives target how much, or how little, information a patient (the prover) must share with a verifier before receiving the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I will not get into the fundamentals of Blockchain here, readers should understand that one of its fundamental backbones is trust within a context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, yet allowing participation in a transaction.

Some of the characteristics of the Blockchain transaction that makes it conducive for Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • Smart contract instance (i.e. the particular invocation of that smart contract) has an owner i.e. the public key of the account holder who creates the same, for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as it knows the public key of the initiator.
  • Some of the important aspects of an approval life cycle like validation, approval, rejection, can be delegated to other stakeholders by delegating that task to the respective public key of that stakeholder.
  • For example, if a doctor needs to approve a medical condition of a patient, the same can be delegated to the doctor and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.
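The off-chain pattern in the points above can be sketched with a simple hash commitment: the ledger stores only a fingerprint of the approval document, the document itself is shared off-chain, and any participant can later verify the received document against the on-chain fingerprint. The field names below are hypothetical.

```python
# Sketch of "proof on-chain, document off-chain" via a hash commitment.
import hashlib
import json

def fingerprint(document: dict) -> str:
    # canonical serialization so identical content always hashes the same
    blob = json.dumps(document, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# The approval document itself stays off-chain...
approval = {"patient": "pk_abc123", "doctor": "pk_doc9", "status": "approved"}
# ...and only its fingerprint is recorded on the ledger
on_chain_hash = fingerprint(approval)

# Later, a verifier receives the document off-chain and checks it
received = {"doctor": "pk_doc9", "status": "approved", "patient": "pk_abc123"}
print(fingerprint(received) == on_chain_hash)   # True: content is untampered
```

Participants of the blockchain thus see only the proof of a claim, never the sensitive details behind it.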

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented on any Blockchain platform, including totally uncontrolled public blockchains, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known and each participant trusts the others, but the due diligence that would otherwise accompany the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

An illustrated view of a Consortium Blockchain involving multiple organizations whose access rights differ. Each member controls its own access to the Blockchain Network with cryptographic keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept for building trust-based networks. While a basic Blockchain platform can help realize the concept in a trust-based manner, a lot of research is underway to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) utilizes the concept of a “zero-knowledge proof”. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article on Patient Data Sharing, Rhodium is a futuristic framework that treats patient data sharing as a journey across multiple stages; Zero Knowledge Proofs definitely find a place at the advanced maturity levels. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey

The healthcare industry today is affected by fraud and lack of trust on one side, and growing patient privacy concerns on the other. In this context, introducing Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Healthcare Data Sharing

Srinivasan Sundararajan

Patient Care Redefined

The fight against the novel coronavirus has witnessed transformational changes in the way patient care is defined and managed. Proliferation of telemedicine has enabled consultations across geographies. In the current scenario, access to patients’ medical records has also assumed more importance.

The journey towards a solution has also taught us that research on patient data is equally important. The more sample data about infected patients, the better the vaccine/remedy. However, the growing concern about the privacy of patient data cannot be ignored. Moreover, patients who provide their data for medical research should also benefit monetarily for their contributions.

The above facts basically point to the need for being able to share vital healthcare data efficiently so that patient care is improved, and more lives are saved.

The healthcare industry needs a data-sharing framework that shares patient data while also providing much-needed controls on data ownership for various stakeholders, including the patients.

Types of Healthcare Data

  • PHR (Personal Health Record): An electronic record of health-related information on an individual that conforms to nationally recognized interoperability standards and that can be drawn from multiple sources while being managed, shared, and controlled by the individual.
  • EMR (Electronic Medical Record): Health-related information on an individual that can be created, gathered, managed, and consulted by authorized clinicians and staff within one healthcare organization. 
  • EHR (Electronic Health Record): Health-related information on an individual that conforms to nationally recognized interoperability standards and that can be created, managed and consulted by authorized clinicians and staff across more than one healthcare organization. 

In the context of large multi-specialty hospitals, EMR could also be specific to one specialist department and EHR could be the combination of information from various specialist departments in a single unified record.

Together these 3 forms of healthcare data provide a comprehensive view of a patient (patient 360), thus resulting in quicker diagnoses and personalized quality care.

Current Challenges in Sharing Healthcare Data

  • Lack of a unique identity for patients prevents a single version of truth. Though there are government-issued IDs like the SSN, their usage is not consistent across systems.
  • High-cost and error-prone integration options with provider-controlled EMR/EHR systems. While there is standardization of healthcare interoperability API specifications, the effort needed for integration remains high.
  • Conflict of interest in ensuring patient privacy and data integrity while allowing data sharing. Digital ethics dictate that patient consent management takes precedence while sharing their data.
  • Monetary benefits of medical research on patient data are not passed on to patients. As mentioned earlier, in today’s context analyzing existing patient information is critical to finding a cure for diseases, but there are no incentives for these patients.
  • Data stewardship, consent management, and compliance needs like HIPAA and GDPR. Let’s assume a hospital specializing in heart-related issues shares a patient record with a hospital that specializes in eye care. How do we decide which portions of the patient information are owned by which hospital, and how governance is managed?
  • Lack of real-time information, contributing to data quality issues and causing incorrect diagnoses.

The above list is not comprehensive but points to some of the issues that are plaguing the current healthcare data-sharing initiatives.

Blockchain for Healthcare Data Sharing

Some of the basic attributes of blockchain are mentioned below:

  • Blockchain is a distributed database, whereby each node of the database can be owned by a different stakeholder (say, hospital departments) and yet all updates to the database eventually converge, resulting in a distributed single version of truth.
  • Blockchain databases utilize a cryptography-based transaction processing mechanism, such that each object stored in the database (say, a patient record) can be distinctly owned by a public/private key pair, and the ownership rights carry through the life cycle of the object (say, from patient admission to discharge).
  • Blockchain transactions are carried out using smart contracts, which attach business rules to the underlying data, ensuring that the data always complies with those rules and making it even more reliable than data in traditional database systems.

These underlying properties of Blockchain make it a viable technology platform for healthcare data sharing, as well as to ensure data stewardship and patient privacy rights.
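The tamper evidence that underpins the first property can be illustrated with a minimal hash chain. This is a toy sketch, not a real distributed ledger (no consensus, no networking); it only shows how embedding each block's hash in its successor makes any edit detectable:

```python
import hashlib
import json

# Minimal hash-chain sketch: each block stores the hash of the previous
# block, so editing any earlier record invalidates the rest of the chain.

def block_hash(block: dict) -> str:
    # Deterministic serialization, then SHA-256.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def chain_valid(chain: list) -> bool:
    # Every block must reference the actual hash of its predecessor.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, {"patient": "P001", "event": "admission"})
append_block(chain, {"patient": "P001", "event": "discharge"})
assert chain_valid(chain)

chain[0]["record"]["event"] = "tampered"  # any edit breaks later links
assert not chain_valid(chain)
```

Real platforms add distributed consensus and digital signatures on top of this basic hash-linking idea.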

GAVS Rhodium Framework for Healthcare Data Sharing

GAVS has developed a framework – ‘Rhodium’, for healthcare data sharing.

This framework combines the best features of multi-modal databases (relational, nosql, graph) along with the viability of data sharing facilitated by Blockchain, to come up with a unified framework for healthcare data sharing.

The following are the high-level components (in a healthcare context) of the Rhodium framework. As you can see, each individual component of Rhodium plays a role in healthcare information exchange at various levels.

GAVS’ Rhodium Framework for Healthcare

GAVS has also defined a maturity model for healthcare organizations for utilizing the framework towards healthcare data sharing. This model defines 4 stages of healthcare data sharing:

  • Within a Hospital 
  • Across Hospitals
  • Between Hospitals & Patients
  • Between Hospitals, Patients & Other Agencies

The below progression diagram illustrates how the framework can be extended for various stages of the life cycle, and typical use cases that are realized in each phase. Detailed explanations of various components of the Rhodium framework, and how it realizes use cases mentioned in the different stages will be covered in subsequent articles in this space.

Rhodium Patient Data Sharing Journey

Benefits of the GAVS Rhodium Framework for Healthcare Data Sharing

The following are the general foreseeable benefits of using the Rhodium framework for healthcare data sharing.

Healthcare Industry Trends with Respect to Data Sharing

The following are some of the trends we are seeing in Healthcare Data Sharing:

  • Interoperability will drive privacy and security improvements
  • New privacy regulations will continue to come up, in addition to HIPAA
  • The ethical and legal use of AI will empower healthcare data security and privacy
  • The rest of 2020 and 2021 will be defined by the duality of data security and data integration, and providers’ ability to execute on these priorities. That, in turn, will, in many ways, determine their effectiveness
  • In addition to industry regulations like HIPAA, national data privacy standards including Europe’s GDPR, California’s Consumer Privacy Act, and New York’s SHIELD Act will further increase the impetus for providers to prioritize privacy as a critical component of quality patient care

The below documentation from the HIMSS site talks about maturity levels with respect to healthcare interoperability, which is addressed by the Rhodium framework.

Source: https://www.himss.org/what-interoperability

This framework is in its early stages of experimentation and is a prototype of how a Blockchain + Multi-Modal Database powered solution could be utilized for sharing healthcare data, that would be hugely beneficial to patients as well as healthcare providers.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi-Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Center of Excellence – Big Data

The Big Data CoE is a team of experts that experiments with and builds cutting-edge solutions by leveraging the latest technologies, like Hadoop, Spark, TensorFlow, and emerging open-source technologies, to deliver robust business results. A CoE is where organizations identify new technologies, learn new skills, and develop appropriate processes that are then deployed into the business to accelerate adoption.

Leveraging data to drive competitive advantage has shifted from being an option to a requirement in today’s hyper-competitive business landscape. One of the main objectives of the CoE is deciding the right strategy for the organization to become data-driven and benefit from the world of Big Data, Analytics, Machine Learning, and the Internet of Things (IoT).

Triple Constraints of Projects

“According to the Chaos Report, 52% of projects are either delivered late or run over the allocated budget. The average across all companies is 189% of the original cost estimate. The average cost overrun is 178% for large companies, 182% for medium companies, and 214% for small companies. The average schedule overrun is 222% of the original time estimate. For large companies, the average is 230%; for medium companies, 202%; and for small companies, 239%.”

The Big Data CoE plays a vital role in bringing down costs and reducing response times, ensuring projects are delivered on time, by helping the organization build skilled resources.

Big Data’s Role

The CoE helps the organization build quality big data applications on its own by maximizing its ability to leverage data. Our data engineers are committed to helping organizations:

  • define your strategic data assets and data audience
  • gather the required data and put in place new collection methods
  • get the most from predictive analytics and machine learning
  • have the right technology, data infrastructure, and key data competencies
  • ensure you have an effective security and governance system in place to avoid huge financial, legal, and reputational problems.

Data Analytics Stages

Architecture-optimized building blocks cover all data analytics stages: data acquisition from a data source, preprocessing, transformation, data mining, modeling, validation, and decision making.
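These stages compose naturally as a pipeline of functions. The sketch below is purely illustrative (the function names and trivial "model" are invented for the example, not part of the CoE's actual tooling):

```python
# Illustrative pipeline: each analytics stage is a small function, and the
# stages are composed end to end.

def acquire() -> list:
    # Data acquisition from a (mock) data source; includes a bad record.
    return ["  5 ", "7", "bad", "3"]

def preprocess(raw: list) -> list:
    # Preprocessing: trim whitespace and drop malformed entries.
    return [v.strip() for v in raw if v.strip().isdigit()]

def transform(clean: list) -> list:
    # Transformation: convert to typed values.
    return [int(v) for v in clean]

def model(values: list) -> float:
    # Modeling: a trivial stand-in model (the mean of the values).
    return sum(values) / len(values)

def decide(score: float, threshold: float = 4.0) -> str:
    # Decision making: act when the score exceeds the threshold.
    return "act" if score > threshold else "wait"

pipeline_result = decide(model(transform(preprocess(acquire()))))
print(pipeline_result)  # act  (mean of [5, 7, 3] is 5.0, above 4.0)
```

Keeping each stage as a separate, testable function is what lets the building blocks be swapped or optimized independently.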

Focus areas

Algorithms support the following computation modes:

  • Batch processing
  • Online processing
  • Distributed processing
  • Stream processing

The Big Data analytics lifecycle can be divided into the following nine stages:

  • Business Case Evaluation
  • Data Identification
  • Data Acquisition & Filtering
  • Data Extraction
  • Data Validation & Cleansing
  • Data Aggregation & Representation
  • Data Analysis
  • Data Visualization
  • Utilization of Analysis Results

A key focus of the Big Data CoE is to establish a data-driven organization by developing proofs of concept with the latest Big Data and Machine Learning technologies. As part of CoE initiatives, we are involved in developing AI widgets for various marketplaces, such as Azure, AWS, Magento, and others. We are also actively engaged in motivating the team to learn cutting-edge technologies and tools like Apache Spark and Scala. We encourage the team to approach each problem pragmatically by helping them understand the latest architectural patterns beyond the traditional MVC methods.

It has been established that business-critical decisions supported by data-driven insights have been more successful. We aim to take our organization forward by unleashing the true potential of data!

If you have any questions about the CoE, you may reach out to them at SME_BIGDATA@gavstech.com

CoE Team Members

  • Abdul Fayaz
  • Adithyan CR
  • Aditya Narayan Patra
  • Ajay Viswanath V
  • Balakrishnan M
  • Bargunan Somasundaram
  • Bavya V
  • Bipin V
  • Champa N
  • Dharmeswaran P
  • Diamond Das
  • Inthazamuddin K
  • Kadhambari Manoharan
  • Kalpana Ashokan
  • Karthikeyan K
  • Mahaboobhee Mohamedfarook
  • Manju Vellaichamy
  • Manojkumar Rajendran
  • Masthan Rao Yenikapati
  • Nagarajan A
  • Neelagandan K
  • Nithil Raj Tharammal Paramb
  • Radhika M
  • Ramesh Jayachandar
  • Ramesh Natarajan
  • Ruban Salamon
  • Senthil Amarnath
  • T Mohammed Anas Aadil
  • Thulasi Ram G
  • Vijay Anand Shanmughadass
  • Vimalraj Subash

Center of Excellence – Database

During World War II, there was a time when the Germans were winning on every front and the fear of Hitler taking over the world was looming. At that point in time, had the Allies not taken drastic measures and invested in ground-breaking technologies such as radar, aircraft, atomic energy, etc., the world would have been starkly different from what it is today.

Even in today’s world, the pace at which things are changing is incredible. The evolution of technology is unstoppable, and companies must be ready. There is an inherent need for them to differentiate themselves by providing solutions that showcase a deep understanding of domain and technology to address evolving customer expectations. What becomes extremely important for companies is to establish themselves as incubators of innovation and possess the ability to constantly innovate and fail fast. Centers of Excellence can be an effective solution to address these challenges.

“An Organisation’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage”

– Jack Welch, former Chairman and CEO of General Electric

The Database CoE was formed with a mission to groom, enhance and incubate talents within GAVS to stay abreast of the evolving technology landscape and help our customers with cutting edge technology solutions.

We identify the experts and the requirements across all customer engagements within GAVS. Regular connects and technology sessions ensure everyone in the CoE learns at least one new topic each week. Below is our charter and roadmap by priority:


The Database CoE is focused on assisting our customers at every stage of the engagement, right from onboarding to planning and execution, with a consultative approach and a futuristic mindset. With the above primary goals, we are currently working on the below initiatives:

Competency Building

When we help each other and stand together, we evolve to be the strongest.

Continuous learning is an imperative in the current times. Our fast-paced trainings on project teams are an alternative to traditional classroom sessions. We believe true learning happens when you work hands-on. With this key aspect in mind, we divide the team into smaller groups and map them to projects for larger exposure and experiential learning.

This started off with a pilot with an ISP, where we trained four CoE members in Azure and Power BI within a span of two months.

Database Maturity Assessment

“When digital transformation is done right, it’s like a caterpillar turning into a butterfly.”

– George Westerman, research scientist at the MIT Center for Digital Business

Why Bother with a Database Assessment?

We often know we have a problem and can visualize the ideal state we want our technology solution to get us to. However, it is challenging to figure out how to get there, because it’s easy to confuse the symptoms with the cause of a problem. Thus, you end up treating the ‘symptom’ with a (potentially expensive) piece of technology that is ill-equipped to address the underlying cause.

We offer a structured process to assess your current database estate and select a technology solution that helps you get around this problem, reduce risks, and fast-track the path to your true objective with future-proofing, by forcing you to both identify the right problem and solve it the right way.

Assessment Framework

Below are the three key drivers powering the assessment:

  • Accelerated Assessment: automated assessment and benchmarking of existing and new database estates against industry best practices and standards.
  • Analyze & Fine-tune: analyze assessment findings and implement recommendations on performance, consistency, and security aspects.
  • NOC + Zero-Touch L2: shift left and automate L1/L2 service requests and incidents with the help of Database CoE automation experts.

As we progress on our journey, we want to establish ourselves as a catalyst to help our customers future-proof technology and help in early adoption of new solutions seamlessly.

If you have any questions about the CoE, you may reach out to them at COE_DATABASE@gavstech.com

CoE Team Members

  • Ashwin Kumar K
  • Ayesha Yasmin
  • Backiyalakshmi M
  • Dharmeswaran P
  • Gopinathan Sivasubramanian
  • Karthikeyan Rajasekaran
  • Lakshmi Kiran  
  • Manju Vellaichamy  
  • Manjunath Kadubayi  
  • Nagarajan A  
  • Nirosha Venkatesalu  
  • Praveen kumar Ralla  
  • Praveena M  
  • Rajesh Kumar Reddy Mannuru  
  • Satheesh Kumar K  
  • Sivagami R  
  • Subramanian Krishnan
  • Venkatesh Raghavendran

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots was not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there that are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source; they cannot be hosted on our own servers, i.e., they cannot be run on-premise. They are mostly generalized and not very specific, for a reason.

DialogFlow vs. RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. ‘Mostly complete’ here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities, intents, etc. either via the API or with their web-based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires cloud-based communication.
  • It cannot be operated on-premise.

Rasa NLU + Core

  • To compete with the best frameworks like Google DialogFlow and Microsoft LUIS, RASA came up with two built-in components: NLU and Core.
  • RASA NLU handles intent classification and entity extraction, whereas RASA Core takes care of the dialogue flow and predicts the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface; users are free to customize it and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open-source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives control over the NLU, allowing us to customize it for a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow, Keras.

Also, Rasa Stack is a platform that has seen some fast growth within 2 years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted like place or time. From the previous example, by intent, we understand the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, it’s an operation which can be performed by the bot. It could be replying something (Text, Image, Video, Suggestion, etc.) in return, querying a database or any other possibility by code.
  • Stories: These are sample interactions between the user and bot, defined in terms of intents captured and actions performed. So, the developer can mention what to do if you get a user input of some intent with/without some entities. Like saying if user intent is to find the day of the week and entity is today, find the day of the week of today and reply.
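The four concepts above fit together as data. The sketch below uses plain Python dictionaries invented for this example (it is not Rasa's actual training-data format, which is defined in Rasa's own files):

```python
# Illustrative only: how an intent, its entities, and a story relate.

# An annotated user input: the intent is its purpose, the entity is the
# useful information extracted from it.
training_example = {
    "text": "Which day is today?",
    "intent": "ask_day_of_week",
    "entities": [{"value": "today", "type": "date"}],
}

# A story: a sample interaction mapping a captured intent (with entities)
# to the action the bot should perform.
story = {
    "name": "tell the day of the week",
    "steps": [
        {"intent": "ask_day_of_week", "entities": ["date"]},
        {"action": "action_tell_day"},  # the operation the bot performs
    ],
}

# Given the captured intent, the story tells the bot which action to run.
next_action = story["steps"][1]["action"]
print(next_action)  # action_tell_day
```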

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides the function of intent classification and entity extraction. This helps the chatbot to understand what the user is saying. Refer to the below diagram of how NLU processes user input.
RASA Chatbot

  • RASA CORE: it uses machine learning techniques to generalize the dialogue flow of the system. It also predicts the next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:

RASA Chatbot

The steps are as follows:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.
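The six steps above can be sketched as a minimal message-handling loop. This is illustrative Python, not Rasa's actual API; every name here (the interpreter, tracker, and policy stand-ins) is invented for the example:

```python
# Hypothetical sketch of the flow: interpreter -> tracker -> policy ->
# action -> response.

def interpret(text: str) -> dict:
    # Step 1 (NLU): convert the message into a dict of text, intent, entities.
    intent = "greet" if "hello" in text.lower() else "fallback"
    return {"text": text, "intent": intent, "entities": []}

class Tracker:
    """Keeps track of the conversation state."""
    def __init__(self):
        self.events = []

    def log(self, event: dict) -> None:
        self.events.append(event)

def policy(tracker: Tracker) -> str:
    # Steps 3-4: the policy receives the tracker state and picks an action.
    last = tracker.events[-1]
    return "utter_greet" if last.get("intent") == "greet" else "utter_default"

RESPONSES = {
    "utter_greet": "Hello! How can I help?",
    "utter_default": "Sorry, I didn't get that.",
}

def handle_message(text: str, tracker: Tracker) -> str:
    parsed = interpret(text)         # 1. interpreter parses the message
    tracker.log(parsed)              # 2. tracker records the new message
    action = policy(tracker)         # 3-4. policy chooses the next action
    tracker.log({"action": action})  # 5. the chosen action is logged
    return RESPONSES[action]         # 6. a response is sent to the user

tracker = Tracker()
reply = handle_message("Hello there", tracker)
print(reply)  # Hello! How can I help?
```

In real Rasa, the policy is a trained model generalizing over stories rather than an if/else, but the data flow between the components is the same.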

Areas of application

RASA is a one-stop solution for various industries:

  • Customer Service: broadly used for technical support, accounts and billing, conversational search, and travel concierge services.
  • Financial Services: used in many banks for account management, bill payments, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and more.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

Observability versus Monitoring

Sri Chaganty

“Observability” has become a key trend in Service Reliability Engineering practice.  One of the recommendations from Gartner’s latest Market Guide for IT Infrastructure Monitoring Tools released in January 2020 says, “Contextualize data that ITIM tools collect from highly modular IT architectures by using AIOps to manage other sources, such as observability metrics from cloud-native monitoring tools.”

Like so many other terms in software engineering, ‘observability’ is a term borrowed from an older physical discipline: in this case, control systems engineering. Let me use the definition of observability from control theory in Wikipedia: “observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.”

Observability is gaining attention in the software world because of its effectiveness at enabling engineers to deliver excellent customer experiences with software despite the complexity of the modern digital enterprise.

When we blew up the monolith into many services, we lost the ability to step through our code with a debugger: it now hops the network.  Monitoring tools are still coming to grips with this seismic shift.

How is observability different from monitoring?

Monitoring requires you to know what you care about before you know you care about it. Observability allows you to understand your entire system and how it fits together, and then use that information to discover what specifically you should care about when it’s most important.

Monitoring requires you to already know what normal is. Observability allows discovery of different types of ‘normal’ by looking at how the system behaves, over time, in different circumstances.

Monitoring asks the same questions over and over again. Is the CPU usage under 80%? Is memory usage under 75%? Is the latency under 500ms? This is valuable information, but monitoring is only useful for known problems.
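Those fixed questions are easy to express in code, which is exactly the point: the questions are decided in advance. A minimal sketch, using the thresholds cited above (the metric names and sample values are illustrative):

```python
# Monitoring as fixed, pre-decided threshold checks: the same questions
# are asked of every sample, and nothing outside them is ever noticed.

def monitor(sample: dict) -> dict:
    return {
        "cpu_ok": sample["cpu_pct"] < 80,
        "mem_ok": sample["mem_pct"] < 75,
        "latency_ok": sample["latency_ms"] < 500,
    }

sample = {"cpu_pct": 62, "mem_pct": 81, "latency_ms": 130}
result = monitor(sample)
print(result)  # {'cpu_ok': True, 'mem_ok': False, 'latency_ok': True}
```

An observability approach, by contrast, would retain the raw, high-cardinality telemetry so that questions not anticipated in this function can still be asked later.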

Observability, on the other hand, is about asking different questions almost all the time, and discovering new things.

Metrics do not equal observability.

What Questions Can Observability Answer?

Below are sample questions that can be addressed by an effective observability solution:

  • Why is x broken?
  • What services does my service depend on — and what services are dependent on my service?
  • Why has performance degraded over the past quarter?
  • What changed? Why?
  • What logs should we look at right now?
  • What is system performance like for our most important customers?
  • What SLO should we set?
  • Are we out of SLO?
  • What did my service look like at time point x?
  • What was the relationship between my service and x at time point y?
  • What was the relationship of attributes across the system before we deployed? What is it like now?
  • What is most likely contributing to latency right now? What is most likely not?
  • Are these performance optimizations on the critical path?

About the Author –

Sri is a Serial Entrepreneur with over 30 years’ experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.