Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I elaborated on the challenges of Patient Master Data Management, Patient 360, and the associated Patient Data Sharing. I also outlined how our Rhodium framework is positioned to address these challenges using a combination of multi-modal databases and Blockchain.

There, I described the maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

Each stage of the journey enables various use cases. For example, the third level, health data sharing between hospitals and patients, covers consent management involving patients as well as monetization of personal data by the patients themselves.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I will elaborate on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How does the Blockchain-powered GAVS Rhodium platform address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, a Zero Knowledge Proof is about proving a statement without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to convince the verifier that a claim is true without revealing any other details to the verifier.
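To make the prover/verifier interaction concrete, here is a toy Python sketch of the classic Schnorr identification protocol, an interactive zero-knowledge proof of knowledge. The tiny parameters are for illustration only, not production cryptography:

```python
import random

# Toy parameters: g generates a subgroup of prime order q modulo p.
p, q, g = 23, 11, 2          # 2 has multiplicative order 11 mod 23

x = random.randrange(1, q)   # prover's secret
y = pow(g, x, p)             # public value; prover claims to know x behind y

# 1. Commit: prover picks a random r and sends t = g^r mod p
r = random.randrange(1, q)
t = pow(g, r, p)

# 2. Challenge: verifier sends a random challenge c
c = random.randrange(1, q)

# 3. Response: prover sends s = r + c*x mod q (s alone reveals nothing about x)
s = (r + c * x) % q

# 4. Verify: g^s == t * y^c mod p holds exactly when the prover knows x,
#    because g^(r + c*x) = g^r * (g^x)^c
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified without revealing x")
```

The verifier learns only that the prover knows x, never x itself, which is exactly the property healthcare transactions described below can exploit.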

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is driven by a lack of trust between parties.

  • Insurance companies are always wary of fraudulent claims (which remain a major issue), so they obtain and analyze extensive documentation and details.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., so they run detailed checks.
  • Pharmacists may have to verify that the patient has indeed been prescribed the medicines before dispensing them.
  • Patients, in turn, often want to make sure that the diagnosis and treatment given to them are proper and that no misdiagnosis has occurred.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or other wrongdoing.

In a healthcare scenario, any of the parties, i.e. patient, hospital, pharmacy, or insurance company, can take on the role of the verifier; typically patients, and sometimes hospitals, are the provers.

While ZKPs can be applied to any transaction involving the above parties, current industry research focuses mostly on patient privacy rights. ZKP initiatives target how much, or how little, information a patient (the prover) must share with a verifier to obtain the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I will not get into the fundamentals of Blockchain here, readers should understand that one of its fundamental backbones is trust within a context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing their personal identities, yet allowing them to participate in a transaction.

Some of the characteristics of Blockchain transactions that make them conducive to Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • A smart contract instance (i.e. a particular invocation of that smart contract) has an owner, namely the public key of the account holder who creates it; for example, a patient’s medical record can be created and owned by the patient themselves.
  • Another party can trust that transaction as long as it knows the public key of the initiator.
  • Important steps of an approval life cycle, such as validation, approval, and rejection, can be delegated to other stakeholders via the respective public key of each stakeholder.
  • For example, if a doctor needs to approve a medical condition of a patient, that approval can be delegated to the doctor, and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone sees only the public key while other details remain hidden.
  • Some of the approval documents can be transferred off-chain (outside of the blockchain), so that participants of the blockchain see only the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption using the senders’ and receivers’ private/public keys enables more advanced use cases.
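The ownership and delegation pattern in the bullets above can be sketched in plain Python. This is a hypothetical model for illustration, not Rhodium’s actual smart-contract code, and all names are invented:

```python
# Hypothetical sketch of a smart-contract instance with delegated approval.
# Participants appear only as public keys; their identities stay hidden.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MedicalRecordContract:
    owner_key: str                        # patient's public key (creator/owner)
    off_chain_hash: str                   # proof of a document kept off-chain
    delegate_key: Optional[str] = None    # key allowed to approve (e.g. a doctor)
    approved: bool = False

    def delegate_approval(self, caller_key: str, doctor_key: str) -> None:
        # Only the record's owner may delegate the approval step.
        if caller_key != self.owner_key:
            raise PermissionError("only the owner can delegate approval")
        self.delegate_key = doctor_key

    def approve(self, caller_key: str) -> None:
        # Only the delegated key (the chosen doctor) may approve.
        if caller_key != self.delegate_key:
            raise PermissionError("only the delegated key can approve")
        self.approved = True

record = MedicalRecordContract(owner_key="patient_pub_key",
                               off_chain_hash="sha256-of-approval-doc")
record.delegate_approval("patient_pub_key", "doctor_pub_key")
record.approve("doctor_pub_key")
assert record.approved
```

Other participants see only the keys, the approval flag, and the hash of the off-chain document, mirroring the proof-without-details idea.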

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented on any Blockchain platform, including fully uncontrolled public blockchains, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known and each participant trusts the others, yet the due diligence that would otherwise accompany the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain involving multiple organizations whose access rights differ. Each member controls their own access to the Blockchain Network with cryptographic keys.

Members typically interact with the Blockchain Network by deploying (i.e. creating) Smart Contracts as well as accessing existing ones.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept in building trust-based networks. While a basic Blockchain platform can help realize the concept in a trust-based manner, a lot of research is underway to arrive at truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) is one such construction built on zero-knowledge proofs. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article about Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple maturity stages, and Zero Knowledge Proofs definitely find a place at the advanced levels. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey


The healthcare industry today is affected by fraud and a lack of trust on one side, and by growing patient privacy concerns on the other. In this context, the introduction of Zero Knowledge Proofs into healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Artificial Intelligence in Healthcare

Dr. Ramjan Shaik

Scientific progress is about many small advancements and occasional big leaps. Medicine is no exception. In a time of rapid healthcare transformation, health organizations must quickly adapt to evolving technologies, regulations, and consumer demands. Since the inception of electronic health record (EHR) systems, volumes of patient data have been collected, creating an atmosphere suitable for translating data into actionable intelligence. The growing field of artificial intelligence (AI) has created new technology that can handle large data sets, solving complex problems that previously required human intelligence. AI integrates these data sources to develop new insights on individual health and public health.

Highly valuable information can sometimes get lost amongst trillions of data points, costing the industry around $100 billion a year. Providers must ensure that patient privacy is protected, and consider ways to find a balance between costs and potential benefits. The continued emphasis on cost, quality, and care outcomes will perpetuate the advancement of AI technology to realize additional adoption and value across healthcare. Although most organizations utilize structured data for analysis, valuable patient information is often “trapped” in an unstructured format. This type of data includes physician and patient notes, e-mails, and audio voice dictations. Unstructured data is frequently richer and more multifaceted. It may be more difficult to navigate, but unstructured data can lead to a plethora of new insights. Using AI to convert unstructured data to structured data enables healthcare providers to leverage automation and technology to enhance processes, reduce the staff required to monitor patients while filling gaps in healthcare labor shortages, lower operational costs, improve patient care, and monitor the AI system for challenges.
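As a deliberately simplistic stand-in for such AI-driven conversion, even a rule-based sketch shows the idea of turning a free-text note into structured fields. The note and the extraction patterns below are invented for illustration; real systems use NLP models rather than hand-written regexes:

```python
import re

# Free-text physician note: "trapped" unstructured data.
note = "Pt is a 64 y/o male. BP 142/90. Started metformin 500 mg twice daily."

# Pull a few structured fields out of the narrative text.
structured = {
    "age":  int(re.search(r"(\d+)\s*y/o", note).group(1)),
    "bp":   re.search(r"BP\s*(\d+/\d+)", note).group(1),
    "drug": re.search(r"Started\s+(\w+)", note).group(1),
}
print(structured)   # {'age': 64, 'bp': '142/90', 'drug': 'metformin'}
```

Once fields like these are structured, they can feed the automation, monitoring, and analytics workflows described above.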

AI is playing a significant role in medical imaging and clinical practice. Providers and healthcare organizations have recognized the importance of AI and are tapping into intelligence tools. The AI health market is expected to reach $6.6 billion by 2021 and to exceed $10 billion by 2024. AI offers the industry incredible potential to learn from past encounters and make better decisions in the future. Algorithms could standardize tests, prescriptions, and even procedures across the healthcare system, kept up-to-date with the latest guidelines in the same way a phone’s operating system updates itself from time to time.

There are three main areas where AI efforts are being invested in the healthcare sector.

  • Engagement – This involves improving how patients interact with healthcare providers and systems.
  • Digitization – AI and other digital tools are expected to make operations more seamless and cost-effective.
  • Diagnostics – Diagnosis and patient care can be improved by products and services that use AI algorithms.

AI is expected to be most beneficial in three other areas, namely physicians’ clinical judgment and diagnosis, AI-assisted robotic surgery, and virtual nursing assistants.

Following are some of the scenarios where AI makes a significant impact in healthcare:

  • AI can be utilized to provide personalized and interactive healthcare, including anytime face-to-face appointments with doctors. AI-powered chatbots can review a patient’s symptoms and recommend whether a virtual consultation or a face-to-face visit with a healthcare professional is necessary.
  • AI can enhance the efficiency of hospitals and clinics in managing patient data, clinical history, and payment information by using predictive analytics. Hospitals are using AI to gather information on trillions of administrative and health record data points to streamline the patient experience. This collaboration of AI and data helps hospitals/clinics to personalize healthcare plans on an individual basis.
  • A taskforce augmented with artificial intelligence can quickly prioritize hospital activity for the benefit of all patients. Such projects can improve hospital admission and discharge procedures, bringing about enhanced patient experience.
  • Companies can use algorithms to scrutinize vast clinical and molecular data sets and personalize healthcare treatments, developing AI tools that collect and analyze everything from genetic sequencing to image recognition, empowering physicians to improve patient care. AI-powered image analysis helps connect data points that support cancer discovery and treatment.
  • Big data and artificial intelligence can be used in combination to predict clinical, financial, and operational risks by taking data from all the existing sources. AI analyzes data throughout a healthcare system to mine, automate, and predict processes. It can be used to predict ICU transfers, improve clinical workflows, and even pinpoint a patient’s risk of hospital-acquired infections. Using artificial intelligence to mine health data, hospitals can predict and detect sepsis, which ultimately reduces death rates.
  • AI helps healthcare professionals harness their data to optimize hospital efficiency, better engage with patients, and improve treatment. AI can notify doctors when a patient’s health deteriorates and can even help in the diagnosis of ailments by combing its massive dataset for comparable symptoms. By collecting symptoms of a patient and inputting them into the AI platform, doctors can diagnose quickly and more effectively.   
  • Robot-assisted surgeries, ranging from minimally-invasive procedures to open-heart surgeries, enable doctors to perform procedures with precision, flexibility, and control beyond human capabilities, leading to fewer surgery-related complications, less pain, and quicker recovery times. Robots employing the latest AI techniques can improve endoscopies, helping doctors get a clearer view of a patient’s illness from both a physical and data perspective.

Despite these advancements, AI is not yet ready to fully interpret a patient’s nuanced response to a question, nor is it ready to replace examining patients, though it is efficient at making differential diagnoses from clinical results. It should be understood very clearly that the role of AI in healthcare is to supplement and enhance human judgment, not to replace physicians and staff.

We at GAVS Technologies are fully equipped with cutting edge AI technology, skills, facilities, and manpower to make a difference in healthcare.

Following are the projects that are being planned:

  • Controlling Alcohol Abuse
  • Management of Opioid Addiction
  • Pharmacy Support – drug monitoring and interactions
  • Reducing medication errors in hospitals
  • Patient Risk Scorecard
  • Patient Wellness – Chronic Disease management and monitoring

In conclusion, it is evident that the advent of AI in the healthcare domain has shown a tremendous impact on patient treatment and care. For more information on how our AI-led solutions and services can help your healthcare enterprise, please reach out to us here.

About the Author –

Dr. Ramjan is a Data Analyst at GAVS. He has a Doctorate degree in the field of Pharmacy. He is passionate about drawing insights out of raw data and considers himself to be a ‘Data Person’.

He loves what he does and tries to make the most of his work. He is always learning something new from programming, data analytics, data visualization to ML, AI, and more.

Center of Excellence – Big Data

The Big Data CoE is a team of experts that experiments and builds various cutting-edge solutions by leveraging the latest technologies, like Hadoop, Spark, Tensor-flow, and emerging open-source technologies, to deliver robust business results. A CoE is where organizations identify new technologies, learn new skills, and develop appropriate processes that are then deployed into the business to accelerate adoption.

Leveraging data to drive competitive advantage has shifted from being an option to a requirement in today’s hypercompetitive business landscape. One of the main objectives of the CoE is deciding on the right strategy for the organization to become data-driven and benefit from a world of Big Data, Analytics, Machine Learning, and the Internet of Things (IoT).

Triple Constraints of Projects

“According to the Chaos Report, 52% of the projects are either delivered late or run over the allocated budget. The average across all companies is 189% of the original cost estimate. The average cost overrun is 178% for large companies, 182% for medium companies, and 214% for small companies. The average overrun is 222% of the original time estimate. For large companies, the average is 230%; for medium companies, the average is 202%; and for small companies, the average is 239%.”

The Big Data CoE plays a vital role in bringing down costs and reducing response times, helping the organization build skilled resources to ensure projects are delivered on time.

Big Data’s Role

The CoE helps the organization build quality big data applications on its own by maximizing its ability to leverage data. Data engineers are committed to helping organizations:

  • define their strategic data assets and data audience
  • gather the required data and put in place new collection methods
  • get the most from predictive analytics and machine learning
  • have the right technology, data infrastructure, and key data competencies
  • ensure an effective security and governance system is in place to avoid huge financial, legal, and reputational problems

Data Analytics Stages

The CoE offers architecture-optimized building blocks covering all data analytics stages: data acquisition from a data source, preprocessing, transformation, data mining, modeling, validation, and decision making.


Focus areas

Algorithms support the following computation modes:

  • Batch processing
  • Online processing
  • Distributed processing
  • Stream processing

The Big Data analytics lifecycle can be divided into the following nine stages:

  • Business Case Evaluation
  • Data Identification
  • Data Acquisition & Filtering
  • Data Extraction
  • Data Validation & Cleansing
  • Data Aggregation & Representation
  • Data Analysis
  • Data Visualization
  • Utilization of Analysis Results
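Schematically, the middle stages of this lifecycle can be chained as simple functions over a shared context. This is a toy illustration, not a CoE deliverable, and all data is invented:

```python
# Toy pipeline: each lifecycle stage is a function over a shared context dict.
def identify(ctx):        # Data Identification
    ctx["source"] = "ehr_exports"
    return ctx

def acquire(ctx):         # Data Acquisition & Filtering
    ctx["rows"] = [{"age": "34"}, {"age": None}, {"age": "51"}]
    return ctx

def extract(ctx):         # Data Extraction
    ctx["ages"] = [row["age"] for row in ctx["rows"]]
    return ctx

def validate(ctx):        # Data Validation & Cleansing
    ctx["ages"] = [int(a) for a in ctx["ages"] if a is not None]
    return ctx

def aggregate(ctx):       # Data Aggregation & Representation
    ctx["mean_age"] = sum(ctx["ages"]) / len(ctx["ages"])
    return ctx

def analyze(ctx):         # Data Analysis
    ctx["insight"] = "older cohort" if ctx["mean_age"] > 40 else "younger cohort"
    return ctx

# Business Case Evaluation happens up front; Visualization and Utilization
# of Analysis Results would consume the final context downstream.
ctx = {"business_case": "estimate patient cohort age profile"}
for stage in (identify, acquire, extract, validate, aggregate, analyze):
    ctx = stage(ctx)

print(ctx["mean_age"], ctx["insight"])   # 42.5 older cohort
```

Keeping each stage as a small, composable step mirrors how real pipelines in Spark or similar engines are structured.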

A key focus of the Big Data CoE is to establish a data-driven organization by developing proofs of concept with the latest Big Data technologies and Machine Learning models. As part of CoE initiatives, we are developing AI widgets for various marketplaces, such as Azure, AWS, Magento, and others. We are also actively engaging and motivating the team to learn cutting-edge technologies and tools like Apache Spark and Scala. We encourage the team to approach each problem pragmatically by helping them understand the latest architectural patterns beyond the traditional MVC methods.

It has been established that business-critical decisions supported by data-driven insights have been more successful. We aim to take our organization forward by unleashing the true potential of data!

If you have any questions about the CoE, you may reach out to them at SME_BIGDATA@gavstech.com

CoE Team Members

  • Abdul Fayaz
  • Adithyan CR
  • Aditya Narayan Patra
  • Ajay Viswanath V
  • Balakrishnan M
  • Bargunan Somasundaram
  • Bavya V
  • Bipin V
  • Champa N
  • Dharmeswaran P
  • Diamond Das
  • Inthazamuddin K
  • Kadhambari Manoharan
  • Kalpana Ashokan
  • Karthikeyan K
  • Mahaboobhee Mohamedfarook
  • Manju Vellaichamy
  • Manojkumar Rajendran
  • Masthan Rao Yenikapati
  • Nagarajan A
  • Neelagandan K
  • Nithil Raj Tharammal Paramb
  • Radhika M
  • Ramesh Jayachandar
  • Ramesh Natarajan
  • Ruban Salamon
  • Senthil Amarnath
  • T Mohammed Anas Aadil
  • Thulasi Ram G
  • Vijay Anand Shanmughadass
  • Vimalraj Subash

Center of Excellence – .Net


“Maximizing the quality, efficiency, and reusability by providing innovative technical solutions, creating intellectual capital, inculcating best practices and processes to instill greater trust and provide incremental value to the Stakeholders.”

With the above mission, we have embarked on our journey to establish and strengthen the .NET Center of Excellence (CoE).

“The only way to do great work is to love what you do.” – Steve Jobs

Expertise in this CoE is drawn from top talent across all customer engagements within GAVS. Team engagement is maintained at a very high level through various connects such as regular technology sessions, advanced trainings for CoE members from MS, and support and guidance for becoming an MS MVP. Members also socialize new trending articles, tools, whitepapers, and blogs within the CoE team and the MS Teams channels set up for collaboration. All communications from MS Premier Communications sent to Gold Partners are also shared within the group. The high-level roadmap planned for this group is laid out below.


The .NET CoE is focused on assisting our customers in every stage of the engagement, right from on-boarding, planning, execution, and technical implementation, all the way to launching and growing. Our prescriptive approach is to leverage industry-proven best practices, solutions, and reusable components, and to include robust resources and training while building a vibrant partner community.

With the above as the primary goal in mind, the CoE group is currently engaged in or planning the following initiatives.

Technology Maturity Assessment

One of the main objectives of this group is to provide constant feedback to all .NET stack projects for improvement. The goal of this initiative is to build a technology maturity index for all projects against a defined set of parameters.


Using these approaches, we were able to make a significant impact on some of our engagements within a short span of time.

Client – Online Chain Store: Identified cheaper cloud hosting option for application UI.

Benefits: Huge cost and time savings.

Client – Health care sector: Provided alternate solution for DB migrations from DEV to various environments.

Benefits: Huge annual licensing cost savings.

Competency Building

“Anyone who stops learning is old, whether at twenty or eighty.” – Henry Ford

Continuous learning and upskilling are the new norms in today’s fast-changing technology landscape. This initiative is focused on providing learning and upskilling support to all technology teams in GAVS. Identifying code mentors and supporting team members to become full stack developers are some of the activities planned under this initiative. Working along with the Learning & Development team, the .NET CoE is formulating different training tracks to upskill team members and provide support for external assessments and MS certifications.

Solution Accelerators

“Good, better, best. Never let it rest. ‘Till your good is better and your better is best.” – St. Jerome

A primary determinant of CoE effectiveness is involvement in solutions and accelerators, and in maintaining standard practices for the relevant technologies across customer engagements throughout the organization.

As part of this initiative, we are focusing on building project templates, DevOps pipelines, and automated testing templates for different technology stacks, for both serverless and server-hosted scenarios. We are also planning similar activities for the desktop/mobile stack with the Multi-platform App UI (MAUI) framework, which is planned to be released for preview in Q4 2020.


Additionally, we are also adopting low-code and no-code development platforms for accelerated development cycles in specific use cases.

As we progress on our journey to strengthen the .NET CoE, we want to act as a catalyst in the rapid and early adoption of new technology solutions and work as trusted partners with all our customers and stakeholders.

If you have any questions about the CoE, you may reach out to them at COE_DOTNET@gavstech.com

CoE Team Members

  • Bismillakhan Mohammed
  • Gokul Bose
  • Kirubakaran Girijanandan
  • Neeraj Kumar
  • Prasad D
  • Ramakrishnan S
  • Saphal Malol
  • Saravanan Swaminathan
  • Senthilkumar Kamayaswami
  • Sethuraman Varadhan
  • Srinivasan Radhakrishnan
  • Thaufeeq Ahmed
  • Thomas T
  • Vijay Mahalingam

RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would be the latter for an increasing number of industries. Conversational agents or chatbots are being employed by organizations as their first-line of support to reduce their response times.

The first generation of bots were not too smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in intelligent bots.

What are the different chatbot platforms?

There are many platforms out there which are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source. They cannot be hosted on our own servers or run on-premise, and they are mostly generalized rather than tailored to specific use cases.

DialogFlow vs. RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for the creation of a chatbot. Mostly complete here means that it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue. It allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities/intents/etc. either via the API or with their web based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires cloud-related communication.
  • It cannot be operated on premise.

Rasa NLU + Core

  • To compete with the best frameworks like Google DialogFlow and Microsoft Luis, RASA came up with two components: NLU and Core.
  • RASA NLU handles intents and entities, whereas RASA Core takes care of the dialogue flow and guesses the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface; users are free to customize it and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own servers, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives control to the NLU, through which we can customize accordingly to a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks like TensorFlow and Keras.

Also, Rasa Stack is a platform that has seen fast growth within just two years.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: It is useful information from the user input that can be extracted like place or time. From the previous example, by intent, we understand the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, it’s an operation which can be performed by the bot. It could be replying something (Text, Image, Video, Suggestion, etc.) in return, querying a database or any other possibility by code.
  • Stories: These are sample interactions between the user and bot, defined in terms of intents captured and actions performed. The developer can specify what the bot should do when it receives input of a given intent, with or without certain entities. For example: if the user’s intent is to find the day of the week and the entity is today, the bot should find the day of the week for today and reply.
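A framework-agnostic sketch of these four concepts as plain Python data may help. This is illustrative only and is not Rasa’s actual training-data format; all names are invented:

```python
# Illustrative only: intent, entity, action, and story as plain Python data.
training_examples = [   # intent + entities labeled on raw user text (NLU data)
    {"text": "Which day is today?",     "intent": "ask_day", "entities": {"date": "today"}},
    {"text": "What day was yesterday?", "intent": "ask_day", "entities": {"date": "yesterday"}},
]

actions = {             # operations the bot can perform
    "action_tell_day": lambda date: f"Looking up the weekday for {date}",
    "utter_greet":     lambda _=None: "Hello! Ask me about dates.",
}

story = [               # sample interaction: (intent, entities) -> action
    ("ask_day", {"date": "today"}, "action_tell_day"),
]

intent, entities, action = story[0]
print(actions[action](entities["date"]))   # Looking up the weekday for today
```

In Rasa itself, the training examples, actions, and stories live in its own project files, but the mapping of intent plus entities to an action is the same idea.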

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides the function of intent classification and entity extraction. This helps the chatbot to understand what the user is saying. Refer to the below diagram of how NLU processes user input.
Diagram: RASA NLU processing user input

  • RASA CORE: it uses machine learning techniques to generalize the dialogue flow of the system. It also predicts next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:

Diagram: RASA message handling flow

The steps are as follows:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.
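The steps above can be sketched as a toy pipeline. This is illustrative only, not Rasa’s actual API, and the keyword-based “NLU” is a stand-in for the real trained model:

```python
# Toy sketch of the message flow above (illustrative, not Rasa's actual API).
class Interpreter:
    """Stand-in NLU: keyword rules instead of a trained model."""
    def parse(self, text):
        intent = "ask_day" if "day" in text.lower() else "greet"
        entities = {"date": "today"} if "today" in text.lower() else {}
        return {"text": text, "intent": intent, "entities": entities}

class Tracker:
    """Keeps conversation state as a log of events."""
    def __init__(self):
        self.events = []
    def update(self, event):
        self.events.append(event)

class Policy:
    """Chooses the next action from the current tracker state."""
    def next_action(self, tracker):
        last = tracker.events[-1]
        return "action_tell_day" if last.get("intent") == "ask_day" else "utter_greet"

def handle_message(text, interpreter, tracker, policy):
    parsed = interpreter.parse(text)        # 1. Interpreter (NLU) parses the message
    tracker.update(parsed)                  # 2. Tracker records the new message
    action = policy.next_action(tracker)    # 3-4. Policy reads state, picks an action
    tracker.update({"action": action})      # 5. The chosen action is logged
    return action                           # 6. The response goes back to the user

print(handle_message("Which day is today?", Interpreter(), Tracker(), Policy()))
# → action_tell_day
```

Swapping the keyword rules for a trained NLU model and the single rule in Policy for a learned dialogue policy is essentially what Rasa Core does at scale.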

Areas of application

RASA is a one-stop solution for various industries:

  • Customer Service: broadly used for technical support, accounts and billings, conversational search, travel concierge.
  • Financial Services: used in many banks for account management, bills, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and others.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

Hyperautomation


Bindu Vijayan

According to Gartner, “Hyper-automation refers to an approach in which organizations rapidly identify and automate as many business processes as possible. It involves the use of a combination of technology tools, including but not limited to machine learning, packaged software and automation tools to deliver work”. Gartner also counts hyper-automation among the year’s top 10 strategic technology trends.

It is expected that by 2024, organizations will be able to lower their operational costs by 30% by combining hyper-automation technologies with redesigned operational processes. According to Coherent Market Insights, “Hyper Automation Market will Surpass US$ 23.7 Billion by the end of 2027.  The global hyper automation market was valued at US$ 4.2 Billion in 2017 and is expected to exhibit a CAGR of 18.9% over the forecast period (2019-2027).”

How it works

To put it simply, hyper-automation uses AI to dramatically enhance automation technologies and augment human capabilities. It uses a spectrum of tools such as Robotic Process Automation (RPA), Machine Learning (ML), and Artificial Intelligence (AI), all functioning in sync to automate complex business processes, even those that once called for input from SMEs. This makes it a powerful tool for organisations in their digital transformation journey.

Hyperautomation brings robotic intelligence into the traditional automation process, making process completion more efficient, faster, and error-free. By combining AI tools with RPA, the technology can automate almost any repetitive task; it automates the automation by discovering business processes and creating bots to automate them. Because it leverages several different technologies, businesses investing in it need the right tools, and those tools must be interoperable. The defining feature of hyperautomation is that it merges several forms of automation that work seamlessly together, so a hyperautomation strategy can consist of RPA, AI, advanced analytics, intelligent business management, and so on. With RPA, bots are programmed to log into software, manipulate data, and respond to prompts. RPA can be as complex as handling multiple systems across several transactions, or as simple as copying information between applications. Combined with Process Automation or Business Process Automation, which manages processes across systems, it can help streamline processes and increase business performance. The tool or platform should be easy to use and, importantly, scalable; investing in a platform that can integrate with existing systems is crucial. Selecting the right tools is what Gartner calls “architecting for hyperautomation.”
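The “RPA plus ML” pairing described above can be thought of as a bot that completes routine cases end to end and escalates exceptions to an SME. The sketch below is purely illustrative: `looks_routine` stands in for a trained model, and the invoice fields and thresholds are invented for the example.

```python
def looks_routine(invoice: dict) -> bool:
    """Stand-in for an ML model that scores whether a case is routine.
    A real deployment would use a classifier trained on past decisions."""
    return invoice["amount"] < 10_000 and invoice["vendor_known"]

def process(invoice: dict) -> str:
    """RPA-style bot: completes routine work itself, escalates the rest."""
    if looks_routine(invoice):
        return "auto-approved"          # bot handles the task end to end
    return "escalated to SME"           # exception falls back to a human

# Hypothetical work queue.
queue = [
    {"id": 1, "amount": 420, "vendor_known": True},
    {"id": 2, "amount": 55_000, "vendor_known": True},
    {"id": 3, "amount": 900, "vendor_known": False},
]
for inv in queue:
    print(inv["id"], process(inv))
```

The point of the sketch is the division of labour: the model decides, the bot acts, and only genuinely unusual cases reach a person.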

Impact of hyperautomation

Hyperautomation has a huge potential for impacting the speed of digital transformation for businesses, given that it automates complex work which is usually dependent on inputs from humans. With the work moved to intelligent digital workers (RPA with AI) that can perform repetitive tasks endlessly, human performance is augmented. These digital workers can then become real game-changers with their efficiency and capability to connect to multiple business applications, discover processes, work with voluminous data, and analyse in order to arrive at decisions for further / new automation.

The impact of being able to leverage previously inaccessible data and processes, and automating them, often results in the creation of a digital twin of the organization (DTO): virtual models of every physical asset and process in an organization. Sensors and other devices monitor digital twins to gather vital information on their condition, and insights are derived regarding their health and performance. The more data there is, the smarter the systems become, providing sharp insights that can thwart problems, help businesses make informed decisions on new services and products, and in general support informed assessments. Having a DTO throws light on hitherto unknown interactions between functions and processes, and on how they can drive value and business opportunities. That’s powerful: you get to see the business outcome as it happens, or the negative effect it causes, and that sort of intelligence within the organization is a powerful tool for making very informed decisions.
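A digital twin’s monitoring loop can be illustrated with a toy model: learn a baseline from historical sensor readings for an asset, then flag new readings that drift far from it. This is a hypothetical sketch under simplifying assumptions (one metric, a z-score rule); real DTOs model far more than a single sensor.

```python
import statistics

class DigitalTwin:
    """Toy virtual model of one physical asset: it mirrors sensor
    readings and flags values far from the asset's learned baseline."""

    def __init__(self, name: str, history: list):
        self.name = name
        self.mean = statistics.mean(history)
        self.stdev = statistics.stdev(history)

    def check(self, reading: float, threshold: float = 3.0) -> str:
        """Return 'alert' when the reading deviates beyond the threshold."""
        z = abs(reading - self.mean) / self.stdev
        return "alert" if z > threshold else "healthy"

# Hypothetical temperature history for a pump, in degrees Celsius.
pump = DigitalTwin("pump-7", [70.1, 69.8, 70.4, 70.0, 69.9])
print(pump.check(70.2))   # within baseline -> "healthy"
print(pump.check(92.5))   # far outside baseline -> "alert"
```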

Hyperautomation is the future, an unavoidable market state

“Hyperautomation is an unavoidable market state in which organizations must rapidly identify and automate all possible business processes.” – Gartner

It is interesting to note that some companies are coming up with no-code automation. Creating tools that can be used even by those who cannot read or write code is a major advantage. For example, if employees can automate the multiple processes they are responsible for, hyperautomation helps get more done at a much faster pace, freeing their time for planning and strategy. This brings more flexibility and agility within teams, as automation can be managed by the teams for the processes they are involved in.

Conclusion

With hyperautomation, it becomes easy for companies to actually see the ROI they are realizing from the number of processes that have been automated, with clear visibility into the time and money saved. Hyperautomation enables seamless communication between different data systems, giving organizations flexibility and digital agility. Businesses enjoy the advantages of increased productivity, quality output, greater compliance, better insights, advanced analytics, and of course automated processes. It gives machines real insight into business processes, enabling significant improvements.

“Organizations need the ability to reconfigure operations and supporting processes in response to evolving needs and competitive threats in the market. A hyperautomated future state can only be achieved through hyper agile working practices and tools.”  – Gartner

Assess Your Organization’s Maturity in Adopting AIOps

Anoop Aravindakshan

Artificial Intelligence for IT operations (AIOps) is adopted by organizations to deliver tangible Business Outcomes. These business outcomes have a direct impact on companies’ revenue and customer satisfaction.

A survey from AIOps Exchange 2019 reports that 84% of the business owners who responded confirmed that they are actively evaluating AIOps for adoption in their organizations.

So, is AIOps just automation? Absolutely NOT!

Artificial Intelligence for IT operations implies the implementation of true Autonomous Artificial Intelligence in ITOps, which needs to be adopted as an organization-wide strategy. Organizations will have to assess their existing landscape, processes, and decide where to start. That is the only way to achieve the true implementation of AIOps.

Every organization trying to evaluate AIOps as a strategy should read through this article to understand their current maturity, and then move forward to reach the pinnacle of Artificial Intelligence in IT Operations.

The primary success factor in adopting AIOps is derived from the Business Outcomes the organization is trying to achieve by implementing AIOps – that is the only way to calculate ROI.

There are four levels of maturity in AIOps adoption, which we have arrived at based on our experience developing an AIOps platform and implementing it across multiple industries. Assessing an organization against each of these levels helps in achieving the goal of true Artificial Intelligence in IT Operations.

Level 1: Knee-jerk

Events and logs are generated in silos, collected from various applications and devices in the infrastructure. These are used to generate alerts that are sent to command centres, which escalate them as per the defined SOPs (standard operating procedures). The engineering teams work in silos, unaware of the business impact these alerts could potentially create. Here, operations are very reactive, which could cost the organization millions of dollars.

Level 2: Unified

All events, logs, and alerts are integrated into one central location, and ITSM processes are unified. This helps break silos, and engineering teams are better prepared to tackle business impacts. SOPs have been adjusted since the process is unified, but incident management is still reactive.

Level 3: Intelligent

Machine Learning algorithms (supervised or unsupervised) are implemented on the unified data to derive insights. Baseline metrics are calibrated and used as a reference for future events; with more data, the metrics get richer. The IT operations team can correlate incidents and events with business impacts by leveraging AI and ML. If the Mean Time To Resolve (MTTR) an incident has been reduced through automated identification of the root cause, the organization has attained level 3 maturity in AIOps.
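The incident-to-event correlation described here can be sketched as a simple time-window join: for each incident, collect the events that occurred shortly before it as candidate root causes. This is a toy illustration under invented data, not a production correlation engine:

```python
def correlate(incidents: list, events: list, window: int = 300) -> dict:
    """For each (id, timestamp) incident, list events that occurred
    within `window` seconds before it - candidate root causes."""
    causes = {}
    for inc_id, inc_t in incidents:
        causes[inc_id] = [e for e, t in events if 0 <= inc_t - t <= window]
    return causes

# Hypothetical timeline (timestamps in seconds).
events = [("disk-full", 100), ("cpu-spike", 950), ("deploy", 980)]
incidents = [("checkout-down", 1000)]
print(correlate(incidents, events))
# -> {'checkout-down': ['cpu-spike', 'deploy']}
```

A real platform would replace the fixed window with learned correlation patterns, but the shape of the output, incidents annotated with probable causes, is what drives the MTTR reduction.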

Level 4: Predictive & Autonomous

The pinnacle of AIOps is level 4. If incidents and performance degradation of applications can be predicted by leveraging Artificial Intelligence, application availability improves. Autonomous remediation bots can be triggered automatically based on the predictive insights, to fix incidents before they occur in the enterprise. Level 4 is a paradigm shift in IT operations: moving operations entirely from being reactive to becoming proactive.
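The predict-then-remediate loop of level 4 can be sketched as follows. Everything here is hypothetical for illustration: `predict_risk` stands in for a trained forecasting model, and the remediation actions are invented placeholders.

```python
# Map each predicted issue to a remediation bot (placeholder actions).
REMEDIATIONS = {
    "memory-leak": lambda svc: f"restarted {svc}",
    "disk-pressure": lambda svc: f"purged temp files on {svc}",
}

def predict_risk(metrics: dict):
    """Stand-in for a trained model: returns (issue, probability)."""
    if metrics["mem_pct"] > 90:
        return "memory-leak", 0.93
    if metrics["disk_pct"] > 85:
        return "disk-pressure", 0.88
    return None, 0.0

def autonomous_loop(service: str, metrics: dict, act_above: float = 0.8) -> str:
    """Fire the matching remediation bot when predicted risk is high."""
    issue, p = predict_risk(metrics)
    if issue and p >= act_above:
        return REMEDIATIONS[issue](service)   # bot acts before users notice
    return "no action"

print(autonomous_loop("api-gw", {"mem_pct": 94, "disk_pct": 40}))
# -> "restarted api-gw"
```

The `act_above` threshold is the key operational knob: set it too low and bots act on noise; too high and the loop degenerates back to reactive level 2 behaviour.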

Conclusion

As IT operations teams move up each level, the essential goal to keep in mind is the long-term strategy to be attained by adopting AIOps. Artificial Intelligence has matured over the past few decades, and it is up to AIOps platforms to embrace it effectively. While choosing an AIOps platform, measure the maturity of the platform’s artificial intelligence coefficient.

About the Author:

An evangelist of Zero Incident Framework™, Anoop has been a part of the product engineering team for a long time and has recently forayed into product marketing. He has over 14 years of experience in Information Technology across various verticals, including Banking, Healthcare, Aerospace, Manufacturing, CRM, Gaming, and Mobile.

Combating a health crisis with digital health technologies

Bindu Vijayan

The current pandemic has exposed yawning gaps in the systems of even the best of developed countries when it comes to responding to virulent pathogens. The world has seen SARS and Ebola in fairly recent times, and with the COVID-19 pandemic it is becoming clear that technology can help combat and overcome future epidemics if we plan and strategize with it. It brings efficiency to our response times, and we are currently learning the importance of using these technologies for prevention as well. A small example: the Canadian AI health monitoring platform BlueDot’s outbreak risk software is said to have predicted the outbreak a whole week before America (which made its announcement on Jan 8) and the WHO (on Jan 9) did. BlueDot predicted the spread of COVID-19 from Wuhan to other cities like Bangkok and Seoul by parsing through huge volumes of international news (in local languages). It was further able to predict where the infection would spread by accessing global airline data to trace and track where infected people were headed.

Contrary to earlier times, today it takes only a few hours to sequence a virus, thanks, of course, to technology. Scientists no longer have to cultivate a sufficient batch of viruses in order to examine them; the genetic material can be obtained from an infected person’s blood sample or saliva. India’s National Institute of Animal Biotechnology (NIAB), Hyderabad, has developed a biosensor that can detect the novel coronavirus in saliva samples. The new portable device, called ‘eCovSens’, can detect coronavirus antigens in human saliva within 30 seconds using just 20 microlitres of sample. Startups like the Canadian GenMarkDx, US-based Aperiomics and XCR Diagnostics, Singapore-based MiRXES, and the Polish company SensDx have introduced top-notch diagnostic solutions. These kits will make it much faster to identify infected people and provide them strict medical care.

Genome sequencing is also vital to fighting the pandemic. The genome of this virus was completely sequenced by Chinese scientists within a month of the detection of the first case, after which biotech companies created synthetic copies of the virus for research. Today, creating a synthetic copy of a single nucleotide costs under 10 cents (compared to $10 earlier), so the work is far quicker and cheaper, which means appropriate medication can be found much faster, helping save more lives.

Healthcare workers are paying a huge price: they run the risk of getting infected, there is often a paucity of PPE, and in some countries they even face assault from crowds that are angry and confused at the situation. Medical workers are targeted by mobs; there are instances where communities don’t allow them to return to their homes after duty, shops don’t sell them necessities, and so on. Medical robots can be real game-changers in such situations, and deploying them wherever possible is becoming a much sought-after option, as robots are impervious to infection. They allow physicians to treat and communicate with patients through a screen, and the patient’s vitals are recorded by the robot, so patients can be monitored very efficiently.

Drones for deliveries, especially medical deliveries, can be used to reach isolation or quarantine zones. Italy made a big success of this: in Bergamo, Italy’s coronavirus epicenter in the Lombardy region, people’s temperatures were read by drones. ‘The Star’ reported that “once a person’s temperature is read by the drone, you must still stop that person and measure their temperature with a normal thermometer,” said Matteo Copia, a police commander in Treviolo, near Bergamo. Drones are also being used for surveillance: in areas where people were not complying with social distancing and lockdown restrictions, authorities used drones to monitor people’s movement and break up social gatherings that could pose a risk to society. Drones are further being used for disinfectant spraying, broadcasting messages, medicine and grocery deliveries, and so on.

Interactive maps give us data on the pandemic in real time, and monitoring a pandemic this widespread and dangerous is crucial to stopping or controlling its spread. These maps are made available to everybody; truth and transparency in a situation of such epic proportions are necessary to avoid panic within communities. We now have apps for tracking the virus’s spread, fatalities, and recovery rates, and apps will be developed in the future that warn us about impending outbreaks and the geographies and flight routes we must avoid.

Implementing these technologies will enable us to manage and conquer situations like the current pandemic we are going through. As Bernardo Mariano Junior, Director of WHO’s Department of Digital Health and Innovation, rightly said “The world needs to be well prepared and united in the spirit of shared responsibility, to digitally detect, protect, respond, and prepare the recovery for COVID 19. No single entity or single country initiative will be sufficient. We need everyone.”

An unprecedented crisis and its unprecedented opportunities

Bindu Vijayan

We will never forget these times. Most of us, the regular morning-news addicts, switch on our TVs hoping to see declining numbers in the list of coronavirus infections. Country by country, we go feverishly through the good news that we are finally seeing, with the curve flattening. There is a lot of fear and trepidation as to how we will pick up and reintroduce our ways of living and working. Even as we experience just how effective working from home can be, it is only natural that companies will resume regular ways of working: back to the office (do we really need to keep paying the real-estate gods as much?), back to travel (do we need to, when virtual meetings were working so well?) as soon as the travel embargoes are lifted. It would soon be back to business, all of us more determined than ever, the whole world raring to go.

Clear communication, as often as it takes, will be the backbone of the new disruptive work practices, as these practices will leave employees with some degree of confusion and unrest, particularly under the threat of the current recession. Our lives have been disrupted in every way by the COVID-19 threat, and it is very important that employee morale stays high. Managers must address employee concerns with sensitivity; everyone is going to have questions about the future of the company, the business, and whether their roles will see changes. Employees must be told about the changes that are going to be effected and the precautions being taken, and also taught and guided how to function best under these circumstances. If someone has recovered from COVID-19, support him or her without stigma and discrimination. Maintaining employee morale through various activities during these times will bring the much-required boost: plan virtual awards and recognitions, and do as much online as you possibly can. And let the communication and interaction be two-way: find out the office sentiment and how employees are feeling, make adjustments and improvements accordingly, and communicate constantly.

Going back to our offices after this crisis requires renewed vigilance, given the nature of the coronavirus. Resuming work at the office premises means having the whole workforce back, which in itself is a very tricky situation (from social distancing back to human density), so it is very important that workplaces are maintained to high levels of hygiene. COVID-19 established that there is definite risk in crowds, and for companies planning to have employees back at their premises, this implies a deeper-than-ever responsibility for workplace hygiene and health. Managing the numbers at our workplaces is going to be critical if we are to stay safe from the threat of another COVID-19 outbreak. Hygiene and cleaning processes need to be stepped up to maximum capacity across workplaces and common areas. Surfaces (e.g. desks and tables) and objects need to be wiped with disinfectant regularly. Alcohol-based hand-rub dispensers should be maintained at prominent places across the facility. Keep promoting hand-washing through posters and monitors across the facility as a constant reminder for employees to take precautions.

Being careful with numbers will require companies to redesign workplaces before employees come back. Even though it might not be entirely viable, the arrangement can be a mix: some employees continue working from home in rotation, perhaps every week or whatever works best for their functions, while others work out of the office in redesigned (read larger, with increased physical distances) workspaces. Allocating more space to employees can be achieved only through rotation shifts, in order to support social distancing for the rest of the required period as per WHO and local health authority guidelines. Plan work schedule charts for the various functions working out of their offices; maintaining strict schedules will not only decrease the risk of infection but also help employees plan better, and ease anxieties and confusion.

To make the best of the situation, let’s take the opportunity to treat this as a huge learning time. Rethink travel: travel only if it is really necessary, and the money saved can be diverted into more important areas. Promote collaboration across geos; virtual meetings have been a big success during this time, so let’s continue to collaborate not just for work and meetings but also to hold online employee events across geos. If anything, the increased use of online meetings has only brought about an increased sense of camaraderie. We have seen our colleagues in New York City working at BronxCare, helping patients in the ICU alongside the medical staff, and it has been a proud moment for every GAVSian across the world to celebrate them, GAVS’ heroes.

And lastly, as we leave this traumatic time behind us, let’s be careful to ensure that we don’t have to go through the situation again. Follow WHO guidelines and take control measures that focus on prevention and on active surveillance for early detection and treatment. The opportunities this pandemic has shown us are multitude. Newspapers report that “our planet is detoxing as humans stay locked down”, with lower carbon emissions. Rob Jackson, a professor of Earth system science at Stanford University, says that carbon output could fall by more than 5% this year, the first dip since the 1.4% reduction after the 2008 financial crisis. The air is cleaner, and it is quieter too: decibel readings at a busy intersection in India were 90 pre-pandemic but recently measured just 68, reports Boston University. Water quality is reported to have improved across the globe; in Venice, famous for its canals, the waterways are benefiting from the absence of the usual boat traffic brought on by thousands of visitors. Wildlife that usually shies away from humans is seen in abundance, from the Ridley turtles on the beaches of Orissa, India, to the otters in Singapore, the whales and deer in Japan, and the orcas in North America. There is so much of the natural world that is suddenly thriving when we gave it a little space.

This has been a time of unprecedented learning opportunities even as our lives got turned upside down. But true as human spirits go, here is something remarkable I read on Linkedin; it reflects hope, positivity, and genuine empathy – here is an excerpt from a post by Dr. Joerg Storm “Next year, I don’t want to hear about the Oscars, Grammys, Tonys or Golden Globes….. I want to see nurses, doctors, ambulance crews, firefighters, healthcare support workers, delivery guys, shop workers, truck drivers, grocery store workers, and all other essential workers get free red carpet parties with awards and expensive goodie bags. “

Virtual desktops on a meteoric rise

Bindu Vijayan

From their versatility in supporting mobile workforces to their security and energy efficiency, virtual desktops have been on a meteoric rise in recent times. The technology brings benefits that make it near mandatory, given the way businesses have to function with the raging pandemic gripping the world.

As Wikipedia puts it: “Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it….In this mode, all the components of the desktop are virtualized, which allows for a highly flexible and much more secure desktop delivery model.”

Financial impact – With VDI, you have the freedom to reallocate huge CAPEX IT investments into other areas according to your business demands. The technology helps you save on high-cost capital expenditures like expensive servers, and it is a sound proposition since hardware issues are dealt with on the main server rather than on several individual machines. Provisioning new desktops and laptops takes time, and VDI takes the headache out of it with simplified management. The money invested in server hardware becomes a one-time cost, as against investing in several desktops, and it also reduces admin and support costs. There are no expensive installations despite remote working, and the maintenance, software upgrades, and so on that usually take up a lot of time when done on several individual machines can be brought down to a minimum by centralizing them, saving man-hours. With VDI using the storage capability and computing power of the data center, individual devices require less RAM and less storage space, which points towards less expensive machines with high performance. It helps to budget hardware investments more appropriately and reallocate to other areas.

Security – Today, security has become a serious concern for businesses. With VDI, tracking external devices becomes much more manageable. Using a central database with centralized storage makes the setup more secure: no individual device holds or stores data, so company data stays under central supervision. You don’t risk having files scattered across devices, and there is the added advantage of not losing data if anything happens to an individual device or desktop. From individual files to the various installed applications, everything is easier to manage centrally, avoiding all the time engineers usually take to locate individual problems. This sort of centralized troubleshooting helps maintain lean IT operations.

In an organization where different people use different types of devices, from desktops to laptops to tablets and high-tech phones, the chances are they are on different operating systems; VDI centralizes the updating process across all of them. And given the unprecedented scenario we are in, disaster recovery is crucial. When data is stored centrally, it can be accessed from anywhere with minimal downtime.

Energy efficient – VDI also comes with the advantage of being energy efficient, using much less electricity than individual desktop computers. Becoming energy efficient is no longer merely aspirational: businesses commit to reducing their carbon footprints to do their bit towards saving our planet.

To sum up, the efficiency and versatility of VDI make it possible for employees to work from anywhere with internet connectivity, regardless of location, type of device, situation, or time of day. Employees are happier when they have the flexibility to work anywhere, anytime, a fact that is especially true of the millennial workforce. It is the current-day need and the answer for businesses, with most of the world having to work from home to battle the pandemic. Employees get the flexibility to work from anywhere without compromising security, with complete control over budgets as there is no need to purchase additional devices. It increases productivity, with the IT department taking care of deploying applications and so on, while users are left relatively free to focus on their work. Plus, all the heavy lifting usually associated with computing gets done by the remote servers where data and programs are centralized, which has applications performing with speed and efficiency. This reduced lag time, coupled with increased computing power, sums up to huge productivity gains.

Are you ready for the switch to virtual desktops?

GAVS’ zDesk will do the job for you. To request a demo, write to inquiry@gavstech.com.

zDesk combines the benefits of VDI and Desktop as a Service (DaaS), and can be hosted either on-premise or on the cloud service of your choice. The zDesk stack is a complete, holistic solution that sits on top of “brownfield” customer infrastructure. The zDesk Enterprise Service Bus collects and distributes logs, security threats, user profiles, guests, hardware inventory and KPIs. zDesk provides storage optimization, including compression and deduplication, which reduces storage costs and upkeep. Persistent storage is in the form of local disks and replicated databases. Additional services such as brokering and monitoring, help ease the process of delivering the desktop to the end-user, and reduce incidents.

Key benefits you get from zDesk:

  • Save 90% on utility bills
  • Save 40% on desktop investment
  • Save 40% on software licensing costs
  • Save 80% on support costs
  • Achieve 0% data loss
  • Deploy 90% faster
  • Secure 100% of endpoints
  • Reduce IT incidents by 70%