API Security

Logaiswar S

“An unsecured API is literally an ‘all you can eat buffet’ for hackers.”

What is API security?

API security is the protection of network-exposed APIs that an organization both owns and uses. APIs are becoming the preferred method for building modern applications, and they are one of the most common ways for systems and apps, such as microservices and containers, to interact. APIs are typically developed using REST or SOAP. However, the true strength of API security depends on how they are implemented.


REST API Security vs. SOAP API Security

REST APIs use HTTP and support Transport Layer Security (TLS) encryption. TLS is a standard that keeps the connection private and ensures that the data transferred between the two systems (client and server) is encrypted. REST APIs are faster than SOAP because of their stateless nature; a REST API doesn't need to store or repackage data.

SOAP APIs use built-in protocols known as Web Services Security (WS-Security). These protocols are defined using a rule set guided by confidentiality and authentication. SOAP APIs are considered more secure than REST APIs, as they use WS-Security for message transmission along with SSL/TLS.

Why is API security important?

Organizations use APIs to connect services and transfer data. Major API-related data breaches stem from broken, exposed, or hacked APIs. How API security is applied depends on the kind of data being transferred.

Security testing of APIs is currently a challenge for 35% of organizations, which need better capabilities than what current DAST and SAST technologies offer to automatically discover APIs and conduct testing. Organizations are moving from monolithic web applications to modern applications, such as those that make heavy use of client-side JavaScript or that are built on a microservices architecture.

How API Security Works

API security depends on authentication and authorization. Authentication comes first; it verifies that the client application has the required permission to use the API. Authorization is the subsequent step that determines what data and actions an authenticated application can access while interacting with the API.

APIs should be developed with protective features to reduce the system’s vulnerability to malicious attacks during API calls.

The developer is responsible for ensuring that the API successfully validates all input collected from the user during API calls. Prepared statements with bind variables are one of the most effective ways to protect an API from SQL injection. XSS can be handled by sanitizing the user input from the API call; cleaning the input helps ensure that potential XSS vulnerabilities are minimized.
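
To make the prepared-statement point concrete, here is a minimal Java sketch, assuming a JDBC connection; the table and column names are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // Illustrative only: the 'users' table and its columns are assumptions.
    public static String findEmail(Connection conn, String userId) throws SQLException {
        // The bind variable (?) keeps user input out of the SQL text,
        // so the input can never change the structure of the query.
        String sql = "SELECT email FROM users WHERE user_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, userId); // input is passed as data, not as SQL
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("email") : null;
            }
        }
    }
}

Even if userId contains something like ' OR '1'='1, it is treated as a literal value rather than executable SQL.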

Best Practices for a Secure API

Some basic security practices and well-established security controls for publicly shared APIs are as follows:

  • Prioritize security: Unsecured APIs can cause substantial losses for the organization, so make security a priority and build APIs securely as they are being developed.
  • Encrypt traffic using TLS: Some organizations may choose not to encrypt API payload data that is considered non-sensitive, but for organizations whose APIs exchange sensitive data, TLS encryption is essential.
  • Validate input: Never pass input from an API through to the endpoint without validating it first.
  • Use a WAF: Ensure that the web application firewall can understand API payloads.
  • Use tokens: Establish trusted identities and then control access to services and resources by using tokens (a minimal sketch follows this list).
  • Use an API gateway: API gateways act as the major point of enforcement for API traffic. A good gateway will allow you to authenticate traffic as well as control and analyze how your APIs are used.
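
As a rough sketch of the token practice, the snippet below verifies the HMAC-SHA256 signature of a compact JWT-style token using only the JDK. A production API would normally delegate this to a vetted library and also check expiry and other claims; the secret handling and token format here are assumptions:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenCheck {
    // Verifies only the signature of a header.payload.signature token.
    public static boolean hasValidSignature(String token, byte[] secret) throws Exception {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return false; // expect header.payload.signature
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        byte[] expected = mac.doFinal((parts[0] + "." + parts[1]).getBytes(StandardCharsets.US_ASCII));
        byte[] provided = Base64.getUrlDecoder().decode(parts[2]);
        // Constant-time comparison avoids timing side channels.
        return MessageDigest.isEqual(expected, provided);
    }
}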

Modern API Data Breach

USPS Corporate Database Exposure

The weakness allowed an attacker to query the USPS website and scrape a database of over 60 million corporate users, including email addresses, phone numbers, account numbers, etc.

Exploitation

The issue was authentication-related and allowed unauthorized access to an API service called ‘Informed Visibility’, which was designed to deliver real-time tracking data for large-scale shipping operations.

This tracking system was tied into a web API in a way that let users change the search parameters and view, and in some cases even modify, the information of other users. Since there wasn’t a robust anti-scraping system in place, this mass exposure was compounded by the automated and unfettered access available.

Lessons Learned

Granting extreme power to a specific service or function without securing every permutation of its interaction flow can lead to such exploits. To mitigate API-related risks, code should be written with the assumption that the APIs might be abused by both internal and external actors.

References:

  1. https://www.redhat.com/en/topics/security/api-security
  2. https://searchapparchitecture.techtarget.com/definition/API-security
  3. https://nordicapis.com/5-major-modern-api-data-breaches-and-what-we-can-learn-from-them/

About the Author –

Logaiswar is a security enthusiast with core interest in Application & cloud security. He is part of the SOC DevSecOps vertical at GAVS supporting critical customer engagements.

Privacy Laws – Friends not Foes!

Barath Avinash

“Privacy means people know what they’re signing up for, in plain language, and repeatedly. I believe people are smart. Some people want to share more than other people do. Ask them.” – Steve Jobs


However trivial a piece of data may seem today, it might be of high importance tomorrow. Misuse of personal data might lead to devastating consequences for the data owner and possibly the data controller.

Why is Data Privacy important?

To understand the importance of data privacy, one must understand the consequences of not implementing privacy protection. A very relevant example is the Facebook-Cambridge Analytica scandal, in which data on millions of Facebook users was used for election canvassing without the users’ explicit consent.

One long-standing argument against privacy goes: “I do not have anything to hide, so I do not care about privacy.” It is true that privacy can provide secrecy, but beyond that, privacy also provides autonomy and therefore freedom, which is more important than secrecy.

How can businesses benefit by being data privacy compliant?

Businesses gain manifold benefits from complying with, implementing, and enforcing privacy practices within the organization. Once an organization is compliant with general data privacy principles, it also becomes largely compliant with healthcare data protection laws, security regulations, and standards. This reduces the effort the organization must spend to comply with several other security and privacy regulations or standards.

How can businesses use privacy to leverage competition?

With privacy being one of the most sought-after domains since the enactment of the GDPR in the EU, followed by the CCPA in the USA and several other data protection laws around the world, businesses can leverage these regulations for competitive advantage rather than treating them as a hurdle or a mere compliance requirement. This can be achieved by being proactive and actively working to implement and enforce privacy practices within the organization. Establish regulatory compliance with customers by asking for consent, being transparent about the data in use, and providing awareness. Educating people through user-centric awareness, rather than awareness for the sake of compliance, is good practice and will enhance the reputation of the business.

Why is privacy by design crucial?

Businesses should also embed the ‘privacy by design’ principle in their operations; products built this way comply with privacy as well as security regulations and standards, resulting in a solidly built, future-proof product.

The work doesn’t stop with enforcement and implementation, continual practice is necessary to maintain consistency and establish ongoing trust with customers.

With statutory privacy regulations and laws increasing in developed countries, several other countries are either planning to enact privacy laws or have already started implementing them. This is the right time for businesses located in developing countries to start looking into privacy practices, so that compliance is effortless when a privacy law is enacted and enforced.

What’s wrong with Privacy Laws?

Privacy laws that are in practice come with their fair share of problems since they are relatively new.

  • Consent fatigue is a major issue with the GDPR, since it requires data owners to constantly consent to the processing or use of their data. This tires data owners and leads them to ignore privacy and consent notices sent by the data processor or data collector.
  • Another common issue is ill-motivated users or automated bots bombarding the data collector with requests for a data owner’s data held by the controller. This is a loophole in the GDPR’s ‘right to access’ that is being exploited in some cases. It burdens the data protection officer, delaying delivery of the requested data to the customer and thus inviting legal consequences.
  • Misuse of purpose limitation guidelines is also a major problem in the GDPR space. Time and again, data collectors give data owners a notice stating the purpose of data processing and subsequently use the same data for a different purpose without obtaining proper consent, thus violating the law.

What does the future hold for privacy?

As new privacy laws are in the works, better and more comprehensive laws will be introduced, learning from the shortcomings of existing laws. Amendments to existing laws will also follow, enhancing the privacy culture.

The privacy landscape is moving toward better and more responsible use of user data. As the concept of privacy and its implementation mature with time, it is high time businesses started implementing privacy strategies primarily for business growth rather than merely for regulatory compliance. That is the goal every mature organization should aim for and work toward.

Privacy is first and foremost a human right; privacy laws are therefore enacted on the basis of rights, because laws can be challenged and modified in a court of justice, but rights cannot be.

References:

https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.htm

https://iapp.org/news/a/fake-dsars-theyre-a-thing/

About the Author –

Barath Avinash is part of GAVS’ security practice risk management team. He has a master’s degree in cyber forensics and information security. He is an information security and privacy enthusiast, and his skill set includes governance, compliance, and cyber risk management.

Blockchain-based Platform for COVID-19 Vaccine Traceability

Srinivasan Sundararajan

Over the last few weeks, several pharma companies across the world have announced COVID-19 vaccines. The respective governments are going through rigorous testing and approval processes to roll out the vaccines soon.

The massive exercise of administering vaccines to billions of people across different geographies poses various challenges. Add to this the fact that different vaccines have strict conditions for storage and handling. Also, the entire history of traceability of the vaccine should be available.

While tracking the supply chain of any commodity in general, and pharmaceutical products in particular, is always complex, the COVID-19 vaccine poses tougher challenges. The following are the current temperature sensitivity needs of various vaccine manufacturers.

[Table: vaccine temperature storage requirements by manufacturer]

The information is from publicly available sites and should not be treated as a guideline for vaccine storage.

Blockchain to the Rescue

Even before the pandemic, Blockchain, with its built-in ability to provide transparency across stakeholders, has been a major platform for pharmaceutical traceability. The critical need to provide COVID-19 vaccine traceability has only strengthened the case for utilizing blockchain in the pharma supply chain.

Blockchain networks, with base attributes like decentralized ownership of data, a single version of truth across stakeholders, the ability to enforce data ownership through cryptography-based security, and the ability to implement and manage business rules, will be a default platform for handling the traceability of COVID-19 vaccines across multiple stakeholders.

Going beyond traceability, Blockchain will also play a major role in the identity and credentialing of the healthcare professionals involved, as well as in consent management for the patients who receive the vaccine. With futuristic technology needs like the Health Passport and the Digital Twin of a Person, Blockchain goes a long way in solving current challenges in healthcare beyond streamlining the supply chain.

GAVS Blockchain-Based Prototype for COVID-19 Vaccine Traceability

GAVS has created a prototype of a Blockchain-based network platform for vaccine traceability to demonstrate its usability. This solution has much larger scope and can extend to various healthcare use cases.

Below is the high-level process flow of the COVID-19 vaccine trial and the various stakeholders involved.

[Figure: high-level process flow of the COVID-19 vaccine trial]

Image Source – www.counterpointresearch.com

Based on the use case and the stakeholders involved, the GAVS prototype first creates a consortium using a private blockchain network. For the sake of simplicity, distributors are not shown, but in real life, every stakeholder would be present. Individuals who receive the vaccine from hospitals are not part of the network at this stage, but in future, their consent can be tracked using Blockchain.

Using Azure Blockchain Service, we can create private consortium blockchain networks where each blockchain network can be limited to specific participants. Only participants in the private consortium blockchain network can view and interact with the blockchain. This ensures that sensitive information about vaccines is not exposed or misused.

[Figure: private consortium blockchain network]

The following smart contracts are created as part of the solution, with ownership assigned to the individual stakeholders.

[Figure: smart contracts and their assigned owners]

A glimpse of a few of the smart contracts is shown below for illustration purposes.

pragma solidity ^0.5.3;
pragma experimental ABIEncoderV2;

contract Batch {
    string  public BatchId;
    string  public ProductName;
    string  public ProductType;
    string  public TemperatureMaintained;
    string  public Efficacy;
    string  public Cost;
    address public CurrentOwner;
    address public ManufacturerAddr;
    address public AirLogAddr;
    address public LandLogAddr;
    address public HospAdminAddr;
    address public HospStaffAddr;
    string[] public AirTemp = new string[](10);
    string[] public LandTemp = new string[](10);
    string[] public HospTemp = new string[](20);
    string  public receiptNoteaddr;

    constructor(string memory _batchId, string memory _productName, string memory _productType,
                string memory _temperatureMaintained, string memory _efficacy, string memory _cost) public {
        ManufacturerAddr = msg.sender;
        CurrentOwner = msg.sender; // the manufacturer starts as the owner of the batch
        BatchId = _batchId;
        ProductName = _productName;
        ProductType = _productType;
        TemperatureMaintained = _temperatureMaintained;
        Efficacy = _efficacy;
        Cost = _cost;
    }

    // Restricts custody-sensitive actions to the current owner of the batch.
    modifier onlyOwner() {
        require(msg.sender == CurrentOwner, "Only Current Owner Can Initiate This Action");
        _;
    }

    // Transfers custody along the supply chain (manufacturer -> logistics -> hospital).
    function updateOwner(address _addr) public onlyOwner {
        CurrentOwner = _addr;
    }

    function retrieveBatchDetails() public view returns (
        string memory, string memory, string memory, string memory, string memory,
        address, address, address, address, address,
        string[] memory, string[] memory, string[] memory, string memory) {
        return (BatchId, ProductName, TemperatureMaintained, Efficacy, Cost,
                ManufacturerAddr, AirLogAddr, LandLogAddr, HospAdminAddr, HospStaffAddr,
                AirTemp, LandTemp, HospTemp, receiptNoteaddr);
    }
}

The front end (DApp) through which the traceability of the COVID-19 vaccine can be monitored has also been developed; the following screenshots show certain important data flows.

Vaccine Traceability System Login Screen


Traceability view for a particular batch of Vaccine


Details of vaccinated patients entered by hospital


Advantages of the Solution

  • With every vaccine monitored over the blockchain, each link along the chain could keep track of the entire process, and health departments could monitor the chain as a whole and intervene, if required, to ensure proper functioning.
  • Manufacturers could track whether shipments are delivered on time to their destinations.
  • Hospitals and clinics could better manage their stocks, mitigating supply and demand constraints. Furthermore, they would get guarantees concerning vaccine authenticity and proper storage conditions.
  • Individuals would have an identical guarantee for the specific vaccine they receive.
  • Overall, this technology-driven approach will help save lives at this critical juncture.

Extensibility to Future Needs

Gartner’s latest Hype Cycle for emerging technologies highlights several new technologies, notably the Health Passport. Just as travelers carry a physical passport, the pandemic has created the need for a health passport, which is essentially a digital health record that passengers can carry on their phones. Ideally, it should show a passenger’s past exposure to diseases and their vaccine records. By properly deploying health passports, several industries can revive themselves by allowing the free-flowing movement of passengers across the globe.

The above blockchain solution, though meant for COVID-19 traceability, can potentially be extended to a health passport once the patient becomes part of the network through a wallet-based authentication mechanism. At GAVS, we plan to explore health passports on Blockchain in the coming months.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at Patient data sharing within Hospitals as well as across Hospitals (Healthcare Interoperability) while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero knowledge proofs.

Tuning Agile Delivery for Customer and Employee Success

Ashish Joseph

What is Agile?

Agile has been very popular in the software development industry for empowering delivery to be more efficient and effective. A common misconception is that Agile is a framework or a process that follows a methodology for software development. In fact, Agile is a set of values and principles: a collection of beliefs that teams can use for decision making and for optimizing project delivery. It is customer-centric and flexible, helping teams adapt accordingly. It doesn’t make decisions for the team; instead, it gives a foundation for teams to make decisions that result in stellar execution of the project.

According to the Agile Manifesto, teams can deliver better by prioritizing the items on the left over those on the right:

  • Individuals and Interactions over Processes and Tools
  • Working Software over Comprehensive Documentation
  • Customer Collaboration over Contract Negotiation
  • Responding to Change over Following a Plan

With respect to software development, Agile is an iterative approach to project management that helps teams deliver results with measurable customer value. The approach is designed to be fast and ensures quality of delivery, aided by periodic customer feedback. Agile breaks the requirement down into smaller portions whose results can be continuously evaluated, with a natural mechanism for responding to changes quickly.


Why Agile?

The world is changing, and businesses must be ready to adapt as market demands change over time. Of the Fortune 500 companies from 1955, 88% have since perished. Nearly half of the S&P 500 companies are forecast to be replaced every ten years. The only way for organizations to survive is to innovate continuously and understand the pulse of the market every step of the way. An innovative mindset helps organizations react to changes and discover new opportunities the market can offer them from time to time.

Agile helps organizations execute projects in an everchanging environment. The approach helps break down modules for continuous customer evaluation and implement changes swiftly.

The traditional approach to software project management uses the waterfall model: Plan, Build, Test, Review, and Deploy. This approach forces iterations back into the plan phase whenever the requirements deviate from the market. When teams choose Agile, they can respond to changes in the marketplace and implement customer feedback without going off plan, because Agile plans are designed to include continuous feedback and the corresponding changes. Organizations should imbibe the ability to adapt and respond fast to new and changing market demands; this foundation is imperative for modern software development and delivery.

Is Agile the Right Fit for my Customer?

People who advocate Agile development claim that Agile projects succeed more often than waterfall delivery models, but this claim has not been validated statistically. A paper titled “How Agile your Project should be?” by Dr. Kevin Thompson of Kevin Thompson Consulting provides a mathematical perspective on both Agile and waterfall project management. Both approaches were applied to the same requirements and subjected to the same unanticipated variables. The paper focused on the statistical evidence supporting each option in order to evaluate fit.

While assessing the right approach, the following questions need to be asked:

  • Are the customer requirements for the project complete, clear and stable?
  • Can the project effort estimation be easily predicted?
  • Has a project with similar requirements been executed before?

If the answer to all the above questions is ‘Yes’, then Agile is not the approach to follow.

The Agile approach provides a better return on investment and risk reduction when there is high uncertainty of different variables in the project. When the uncertainty is low, waterfall projects tend to be more cost effective than agile projects.

Optimizing Agile Customer Centricity

Customer centricity should be the foundation of all project deliveries. It helps businesses align themselves to the customer’s mission and vision for the project at hand. When taking an Agile approach to a project in a dynamic and changing environment, the following principles can help organizations align better with their customer goals:

  • Prioritizing Customer Satisfaction through timely and continuous delivery of requirements.
  • Openness to changing requirements, regardless of the development phase, to enable customers to harness the change for their competitive advantage in the market.
  • Frequent delivery of modules with a preference towards shorter timelines.
  • Continuous collaboration between management and developers to understand the functional and non-functional requirements better.
  • Measuring progress through the number of working modules delivered.
  • Improving velocity and agility in delivery by concentrating on technical excellence and good design.
  • Periodic retrospection at the end of each sprint to improve delivery effectiveness and efficiency.
  • Trusting and supporting motivated individuals to lead projects on their own and allowing them to experiment.

Since Agile is a collection of principles and values, its real utility lies in giving teams a common foundation to make good decisions with actionable intelligence to deliver measurable value to their customers.

Agile Empowered Employee Success

A truly Agile team makes its decisions based on Agile values and principles. These values and principles have enough flexibility to allow teams to develop software in the ways that work best for their market situation, while providing enough direction to help them continually move toward their full potential. Team and employee empowerment through these values and principles aids overall performance.

Agile not only improves the team but also the environment around it, by helping employees stay compliant with audit and governance requirements. It reduces the overall project cost for dynamic requirements and focuses on technical excellence along with an optimized delivery process. The 14th Annual State of Agile Report (2020), published by StateofAgile.com, surveyed 40,000 Agile executives to get insights into the application of Agile across different areas of enterprises. The report examined the Agile techniques that contributed most toward the employee success of the organization. The following are some of the most preferred Agile techniques that helped enhance employee and team performance.

[Chart: most preferred Agile techniques for enhancing team performance]

All the above Agile techniques help teams and individuals introspect their actions and understand areas of improvement in real time, with periodic qualitative and quantitative feedback. Each deliverable from multiple cross-functional teams can be monitored, tracked, and assessed under a single roof. Collectively, these techniques bring about an enhanced form of delivery and empower each team to realize its full potential.

Above all, Agile techniques help teams feel the pulse of the customer every step of the way. The openness to change, regardless of the phase, helps them map all the requirements, leading to overall customer satisfaction coupled with employee success.

Top 5 Agile Approaches

[Chart: top 5 Agile approaches]

A Truly Agile Organization

The majority of Agile adoption has been concentrated in development, IT, and operations. However, organizations should strive for effective alignment and coordination across all departments. Organizations today are aiming to expand agility into areas beyond building, deploying, and maintaining software. At the end of the day, Agile is not about the framework; it is about the Agile values and principles the organization believes in for achieving its mission and vision in the long run.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management. He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Health Information Exchanges in Post-Pandemic Healthcare

Srinivasan Sundararajan

Electronic Health Information Exchange (HIE) allows doctors, nurses, pharmacists, other health care providers and patients to appropriately access and securely share a patient’s vital medical information electronically – improving the speed, quality, safety, and cost of patient care.

HIE enables the electronic movement of clinical information among different healthcare information systems. The goal is to facilitate access to and retrieval of clinical data to provide safer and more timely, efficient, effective, and equitable patient-centered care.

While the importance of HIE is clearly visible, the important questions are how hospitals can collaborate to form an HIE and how the HIE will consolidate data from disparate patient information sources. This brings us to the discussion of HIE data models.

HIE Data Models 

There are multiple ways in which an HIE can get its data, each influencing how the interoperability goals are achieved, how easily the HIE platform is built, and how sustainable it is in the long run, especially as the number of hospitals in the ecosystem increases. The two models are:

  • Centralized
  • De-centralized

Centralized HIE Data Model

[Figure: centralized HIE data model]

This is a pictorial representation of the centralized HIE data model.

As evident, in the centralized model, all the stakeholders send their data to a centralized location, and typically an ETL (Extraction, Transformation, and Loading) process ensures that all the data is synced with the centralized server.

Advantages

  • From a query performance perspective, this model is one of the most efficient, because the DBAs have complete control of the data and, with techniques like partitioning and indexing, can ensure that queries run in the best possible manner. Since the hardware is fully owned by a single organization (the HIE itself), it can be scaled out or up to meet the demands of the business.
  • This model is fairly self-sufficient once the mechanism for the data transfers is established, as there is no longer a need to connect to individual hospitals.
  • Smaller hospitals in the ecosystem need not carry the burden of maintaining their data and interoperability needs; they can simply send their data to the centralized repository.
  • There is better scope for population-level predictive and prescriptive analytics, as the data resides in one place and it is easier to create models based on historical data.

Limitations

  • This model needs the highest level of security built in, because any breach of the system compromises the data of the entire ecosystem. Also, considering that individual hospitals send their data to this model, all the responsibility lies with a single agency (the HIE), which is highly exposed to lawsuits related to data privacy and confidentiality.
  • Patients have no control over managing their own records or the right to consent to data access; even though this information can be collected, there is no easy way to enforce it.
  • The system is prone to a single point of failure and hence requires effort to keep the platform highly available.
  • This model faces scalability challenges as the network grows beyond a point; unless the platform is modernized with the latest big data databases, the system will have scalability issues.
  • A lot of coordination is required to monitor the individual ETL jobs for success, failure, and record synchronization details, so this model demands a large allocation of IT resources and increases the total cost of ownership.
  • Expense sharing between the HIE, data producers, and data consumers is difficult and needs a strong governance model.
  • It is difficult to match patient information across hospitals unless both hospitals use deterministic matching attributes like SSN; otherwise, patients with misspelt names, different addresses, etc., are hard to match.
  • This model may suffer data integrity issues when participant hospitals merge with each other, since the IT systems of the two hospitals need to take care of the internal details of the ETL jobs.

De-centralized HIE Data Model

[Figure: decentralized HIE data model]

The above is a pictorial representation of the decentralized HIE data model.

As evident, in this model the individual hospitals continue to own all their data; however, the centralized database keeps a pointer, the MPI (Master Patient Index), which serves as the unifying factor for consolidating data for a patient. While some books also suggest a variant called the hybrid model, which combines the centralized and decentralized data models, we believe that a pure-play decentralized model is itself a hybrid (i.e., centralized + decentralized), because a centralized repository is still needed to keep the master patient index along with all the access rights and related information.
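
To make the pointer idea concrete, an MPI entry can be pictured as a small record that maps one ecosystem-wide identifier to per-hospital record locators. A hypothetical Java sketch, not a standard schema:

import java.util.List;

public class MpiEntry {
    // One row in the centralized index: no clinical data, only pointers.
    String masterPatientId;        // ecosystem-wide identifier
    List<RecordLocator> locators;  // where the actual records live

    static class RecordLocator {
        String hospitalId;         // which member hospital holds the data
        String localPatientId;     // that hospital's internal patient ID
        String apiEndpoint;        // e.g. the hospital's patient API base URL
    }
}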

Advantages

  • It is much easier to implement, as no huge investment is required from a centralized provider perspective. HIEs in this model can start small and grow on demand.
  • It is less expensive, as no single organization owns all the data, only a pointer to the data; the respective hospitals continue to own the data.
  • It is much easier to give patients control of their own data, and patient consent can be the key to accessing information from the respective hospitals.
  • There is no need to worry about broken ETL jobs or the latency between source and destination; the data is always current.
  • There is no single point of failure, as the individual subsystems continue to exist even if the link to one particular hospital is broken. Maintaining the high availability of this lightweight platform is much easier than for the monolithic large database of a centralized data model.
  • A breach of the centralized repository still does not compromise all the data, as the individual hospitals are likely to have additional controls that prevent a free run for hackers. This also prevents one organization from facing all the legal consequences of a patient data breach.

Limitations

  • This model has a query performance problem when aggregating a patient’s information across multiple hospitals, because each record has to be obtained with a separate API call and a facade has to group the multiple datasets.
  • It is difficult to establish common standards for data formats and APIs across multiple hospitals; this may result in each hospital having its own methods.
  • Bringing all the stakeholders, including the patients, to agree on an MPI (Master Patient Index) poses governance challenges and needs to be implemented carefully.
  • Providing analytics for a large population is challenging due to the difficulty of consolidating the data.

GAVS Point of View & Role of Blockchain

While no model can be 100% perfect for building an HIE, GAVS’ analysis points to the fact that the decentralized model of building and operating an HIE is better than the centralized model. The COVID pandemic has changed the world, and the boundaries of healthcare no longer sit within a small geography or neighbourhood as they used to. The more participants and the bigger the network, the better it is for population health improvement initiatives. Also, in highly populous countries with initiatives like national healthcare for all, such large initiatives cannot be delivered using a pure-play centralized model.

From an implementation perspective, the healthcare IT world has been curiously watching the role of Blockchain in data interoperability and in the implementation of decentralized HIEs. Blockchain, a distributed database, has decentralization built into its core architecture, so it would be easier to implement a decentralized HIE using blockchain.

GAVS’ reference implementation, Rhodium, which caters to healthcare data management and interoperability, positions Blockchain as a core mechanism for patient data sharing. We will share more of our thoughts and details of the reference implementation in the coming articles in this series.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at Patient data sharing within Hospitals as well as across Hospitals (Healthcare Interoperability), while bringing more trust and transparency into the healthcare process using patient consent management, credentialing and zero knowledge proofs.

Palo Alto Firewall – DNS Sinkhole

Ganesh Kumar J

Starting with PAN-OS 6.0, DNS sinkhole is an action that can be enabled in Anti-Spyware profiles. A DNS sinkhole can be used to identify infected hosts on a protected network using DNS traffic in environments where the firewall can see the DNS query to a malicious URL.

The DNS sinkhole enables the Palo Alto Networks device to forge a response to a DNS query for a known malicious domain/URL, causing the malicious domain name to resolve to a definable IP address (a fake IP) that is returned to the client. If the client then attempts to access the fake IP address and a security rule is in place that blocks traffic to this IP, the attempt is recorded in the logs.

Sample Flow

We need to keep the following in mind before assigning an IP address in the DNS sinkhole configuration.

When choosing a fake IP, make sure that it is a fictitious IP address that does not exist anywhere inside the network. DNS and HTTP traffic must pass through the Palo Alto Networks firewall for the malicious URL to be detected and for access to the fake IP to be stopped. If the fake IP is routed through a different path, and not through the firewall, this will not work properly.
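
One simple way to see the mechanism from a client inside the protected network is to resolve a domain you know is sinkholed and compare the answer with the configured fake IP. A minimal Java sketch; the domain below is a placeholder, and 72.5.65.111 is the default Palo Alto Networks sinkhole address used later in this article:

import java.net.InetAddress;

public class SinkholeProbe {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute a domain your firewall sinkholes
        // and the fake IP configured in your Anti-Spyware profile.
        String testDomain = "known-bad.example.com";
        String sinkholeIp = "72.5.65.111";
        InetAddress resolved = InetAddress.getByName(testDomain);
        // If the DNS query traversed the firewall, the forged answer
        // should point at the sinkhole address.
        System.out.println(testDomain + " -> " + resolved.getHostAddress());
        System.out.println("Sinkholed: " + resolved.getHostAddress().equals(sinkholeIp));
    }
}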

Steps:

  1. Make sure the latest Antivirus updates are installed on the Palo Alto Networks device. From the WebUI, go to Device > Dynamic Updates on the left. Click “Check Now” in the lower left, and make sure that the Anti-Virus updates are current. If they are not, update them before proceeding. Automatic updates can be configured if they are not set up.

Fig1.1

IT Automation with AI

Note: A paid Threat Prevention subscription is required for the DNS sinkhole to function properly.

  2. Configure DNS Sinkhole Protection inside an Anti-Spyware profile. Click on Objects > Anti-Spyware under Security Profiles on the left. Use either an existing profile or create a new one. In the example below, the “alert-all” profile is used:

Fig1.2:

Office 365 Migration

Click the name of the profile, alert-all, then click on the DNS Signatures tab.

Fig1.3:

Software Test Automation Platform

Change the “Action on DNS queries” to ‘sinkhole’ if it is not already set.
Click on the Sinkhole IPv4 field and either select the default Palo Alto Networks sinkhole IP (72.5.65.111) or a different IP of your choosing. If you opt to use your own IP, ensure the IP is not used inside your network and is preferably not routable over the internet (RFC 1918).
Click on Sinkhole IPv6 and enter a fake IPv6 address. Even if IPv6 is not used, something still needs to be entered; the example shows ::1. Click OK.

Note: If nothing is entered for the Sinkhole IPv6 field, OK will remain grayed out.

  3. Apply the Anti-Spyware profile to the security policy that allows DNS traffic from the internal network (or internal DNS server) to the internet. Click on Policies > Security on the left side. In the rules, locate the rule that allows DNS traffic outbound, click on its name, go to the Actions tab, and make sure that the proper Anti-Spyware profile is selected. Click OK.

Fig1.4:

Software Product Engineering Services

  4. The last thing needed is a security rule that blocks all web-browsing and SSL access to the fake IP 72.5.65.111 (and ::1 if using IPv6). This ensures that traffic to the fake IP from any infected machines is denied.

Fig1.5:

Security Iam Management Tools

  5. Commit the configuration.

Fig1.6:

Rpa in Infrastructure Management

(To be continued…)


About the Author –

Ganesh currently manages the network, security, and engineering teams for a large US-based customer. He has been associated with the network and security domain for more than 15 years.

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. However, new advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is, then look at the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard. They are used to isolate different cargoes transported via ship. Software technologies use the same approach: containerization.

Containers are different from virtual machines (VMs), where a VM needs a guest operating system running on a host operating system (OS). Containers use OS-level virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate guest operating system.

In containers, software and its dependencies are packaged together so that the container can run anywhere, whether on an on-premises desktop or in the cloud.

[Figure: containers vs. virtual machines]

Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Containers Image Vulnerabilities

When a container is created, its image may be fully patched, with no known vulnerabilities. But a vulnerability might be discovered later, when the container image is no longer being patched. Traditional systems can be patched in place when a fix for a vulnerability becomes available, but for containers, updates must be made upstream in the images, which are then redeployed. So, containers become vulnerable when an older image version is deployed.

Also, if the container image is misconfigured or unwanted services are running in it, it will lead to vulnerabilities.

Countermeasures

Using traditional vulnerability assessment tools to assess containers leads to false positives. You need a tool designed to assess containers so that you get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together. Hence, there is a chance of malicious files being added, unintentionally or intentionally. Such malicious software has the same effect as on traditional systems.

Secrets embedded in clear text pose a security risk if someone unauthorized gains access.

Countermeasures

Continuous monitoring of all images for embedded malware with signature and behavioral detection can mitigate embedded malware risks.

Secrets should never be stored inside container images; when required, they should be provided dynamically at runtime.
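
A tiny sketch of the runtime-injection idea in Java; the variable name is hypothetical, and the orchestrator (or its secrets store) is assumed to populate the environment at start-up:

import java.util.Optional;

public class DbConfig {
    public static String dbPassword() {
        // Injected at runtime by the orchestrator, e.g. from a secrets store;
        // nothing sensitive is baked into the image layers themselves.
        return Optional.ofNullable(System.getenv("DB_PASSWORD"))
                .orElseThrow(() -> new IllegalStateException("DB_PASSWORD not injected"));
    }
}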

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This capability may lead teams to run container images from third parties without validating them, thus introducing data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

A registry is nothing but a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, man-in-the-middle attacks could intercept network traffic to steal programmer or admin credentials, or to supply outdated or fraudulent images.

You should configure development tools and running containers to connect to registries only over encrypted channels to overcome the insecure connection issue.

  2. Insufficient authentication and authorization restrictions

As we have seen, registries store container images with sensitive information. Insufficient authentication and authorization can result in exposure of the technical details of an app and the loss of intellectual property. It can also lead to compromise of the containers.

Access to registries should be authenticated, and only trusted entities should be able to add images; all write access should be periodically audited, and read access should be logged. Proper authorization controls should be enabled to avoid authentication- and authorization-related risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed with the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating all users as administrators will affect the operation of the containers managed by the orchestrator.

Orchestrator users should be given only the required access, with proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In container deployments, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters only see the encrypted packets traveling between hosts, leading to security blindness and ineffective traffic monitoring.

To overcome this risk, orchestrators need to separate network traffic into virtual networks according to sensitivity level.

  3. Orchestrator node trust

You need to pay special attention to maintaining trust between hosts, especially the orchestrator node. Weakness in the orchestrator configuration increases risk; for example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, orchestration should be configured securely for nodes and apps. If any node is compromised, it should be isolated and removed without disturbing the other nodes.

Container Risks

  1. App vulnerabilities

It is always good to have defense in depth. Even after following the recommendations above, containers may still be compromised if the apps themselves are vulnerable.

As noted earlier, traditional security tools may not be effective with containers. You need a container-aware tool that detects behavioral anomalies in the app at runtime so that issues can be found and mitigated.

  2. Rogue containers

It is possible to have rogue containers. Developers may launch them to test their code and then leave them behind. This can lead to exploits, as those containers might not have been thoroughly checked for security loopholes.

You can overcome this with separate environments for development, test, and production, along with role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has an attack surface, and the larger it is, the easier it is for an attacker to find and exploit a vulnerability and compromise the host operating system and the containers running on it.

If you cannot use a container-specific operating system, you can follow the NIST SP 800-123 guide to server security to minimize the attack surface.

  2. Shared kernel

A host OS that only runs containers has a smaller attack surface than a general-purpose host machine, which needs the libraries and packages required to run a web server, a database, and other software.

You should not mix container and non-container workloads on the same host machine.

If you wish to further explore this topic, I suggest you read NIST.SP.800-190.



About the Author –

Anandharaj is a lead DevSecOps engineer at GAVS and has over 13 years of experience in cybersecurity across different verticals, including network security, application security, computer forensics, and cloud security.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, in the mainstream media and also among our clients, is that a successful attack against an IT system requires ‘investigating’ and finding a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risks is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST found that correcting security bugs early on is orders of magnitude cheaper than doing so when the development has been completed.

When composing text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, there are security tools known as ASTs (Application Security Testing) that find programming errors that introduce security weaknesses. ASTs report the file and line where a vulnerability is located, in the same way that a text editor reports the page and line that contain a typo.

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just as it is almost impossible to catch every error in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools, also known as ASTs or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming and could lead to a vulnerability.

DAST dynamic analysis replicates the view of an attacker. The tool executes hundreds or thousands of requests against the application, designed to replicate the activity of an attacker, to find security vulnerabilities. This is black-box analysis because the point of view is purely external, with no knowledge of the application’s internal architecture.

The level of detail provided by the two types of tools differs. SAST tools provide the file and line where the vulnerability is located, but no URL, while DAST tools provide the external URL but no details on the location of the problem within the application’s code base. Some teams use both tools to improve visibility, but this requires long and complex triage to manage the vulnerabilities.

The Interactive AST Approach

Interactive Application Security Testing (IAST) tools combine the static and dynamic approaches. They have access to the internal structure of the application and to the way it behaves with actual traffic. This privileged point of view is ideal for conducting security analysis.

From an architecture point of view, IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms, such as Application Performance Monitoring (APM) tools, share this proven approach.

Once the agent has been installed, it places automatic security sensors at the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage is much better than the visibility that SAST and DAST rely on.
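
On the JVM, this agent-based instrumentation typically rides on the java.lang.instrument API. The skeleton below shows only the entry point such an agent hooks into; it is a bare illustration of the mechanism, not a real IAST sensor:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class IastAgentSkeleton {
    // Invoked by the JVM before main() when the app is started with
    // -javaagent:agent.jar (the jar's manifest must name this Premain-Class).
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className, Class<?> cls,
                                    ProtectionDomain domain, byte[] classfileBuffer) {
                // A real IAST agent would rewrite bytecode here to weave security
                // sensors into request handling, database access, etc.
                if (className != null && className.startsWith("com/example/app")) {
                    System.out.println("Would instrument: " + className);
                }
                return null; // returning null leaves the class unchanged
            }
        });
    }
}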

In terms of specific results, we can look at two important metrics: how many types of vulnerabilities the tool finds, and how many of the identified vulnerabilities are false positives. The best DAST is able to find only 18% of the existing vulnerabilities in a test application. Even worse, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not true problems!

[Chart: OWASP Benchmark results for SAST, DAST, and IAST tools]

Source: Hdiv Security via OWASP Benchmark public result data

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments; development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic point of views allow us to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of the secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces manual verification activities.

An IAST tool can add tons of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.

Business Intelligence Platform RESTful Web Service

Albert Alan

Restful API

RESTful web services are web services based on the REST architecture. Representational State Transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web. In this architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs), typically links on the Web.

RESTful Web Service

REST has some advantages over SOAP (Simple Object Access Protocol) but is similar in technology, since it is also a function call via the HTTP protocol. REST is easier to call from various platforms, transfers pure human-readable data in JSON or XML, and is faster and saves resources.

In the basic idea of REST, a resource is addressed via REST, not its methods. The state of the resource can be changed by the REST access, the change being driven by the passed parameters. A frequent application is connecting SAP PI via the REST interface.

When to Use REST Services

  • You want to access BI platform repository objects or perform basic scheduling.
  • You want to use a programming language that is not supported by another BI platform SDK.
  • You want to extract all the query details and number of records per query for all the reports like Webi and Crystal, etc.
  • You want to extract folder path of all reports at once.

Process Flow

[Figure: RESTful web service process flow]

RESTful Web Service Requests

To make a RESTful web service request, you need the following (a sketch combining them follows the list):

  • URL – The URL that hosts the RESTful web service.
  • Method – The type of HTTP method to use for sending the request, for example GET, PUT, POST, or DELETE.
  • Request header – The attributes that describe the request.
  • Request body – Additional information that is used to process the request.
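Putting these four parts together, a logon request of the kind listed in the URI summary below might look like this sketch (the host, port, and JSON body fields are placeholders to be adapted to your system, not taken from this document):

POST http://localhost:6405/biprws/v1/logon/long
Header: Content-Type: application/json
Header: Accept: application/json
Body: {"userName": "<user>", "password": "<password>", "auth": "secEnterprise"}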

Common RWS Error Messages


RESTful Web Service URIs Summary List

  • /v1 – Service document that contains a link to the /infostore API. This is the root level of the infostore resource.
  • /v1/infostore – Feed containing all the objects in the BOE system.
  • /v1/infostore/<object_id> – Entry corresponding to the info object with the given SI_ID, for example /v1/infostore/99.
  • /v1/logon/long – Returns the long form for logon, which contains the user and password authentication template. Used to log on to the BI system based on the authentication method.
  • /v1/users/<user_id> – XML feed of user details in the BOE system. You can modify a user using the PUT method and delete a user using the DELETE method.
  • /v1/usergroups/<usergroup_id> – XML feed of user group details in the BOE system. Supports the GET, PUT, and DELETE methods: modify a user group using PUT and delete it using DELETE.
  • /v1/folders/<folder_id> – XML feed that displays the details of a folder. You can modify the folder using the PUT method and delete it using the DELETE method.
  • /v1/publications – XML feed of all publications created in the BOE system. This API supports the GET method only.

Extended Workflow

The workflow is as follows:

  • Pass the base URL

GET http://localhost:6405/biprws/v1/users

  • Pass the headers

  • Get the XML/JSON response
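The steps above can be automated with a minimal Java sketch such as the following (the port, the X-SAP-LogonToken header, and the token value are assumptions to be adapted to your system):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BiRestClient {

    // Base URL of the BI platform RESTful web service (host and port are assumptions).
    private static final String BASE_URL = "http://localhost:6405/biprws";

    public static void main(String[] args) throws Exception {
        // Step 1: pass the base URL and address the users resource.
        URL url = new URL(BASE_URL + "/v1/users");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Step 2: pass the headers. A real call must first obtain a logon
        // token via /v1/logon/long and send it in the X-SAP-LogonToken header.
        conn.setRequestProperty("Accept", "application/json"); // or application/xml
        conn.setRequestProperty("X-SAP-LogonToken", "<logon_token>"); // placeholder

        // Step 3: read the XML/JSON response and print it.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}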

Automation of REST Calls

The Business Intelligence Platform RESTful Web Service SDK (BI-REST-SDK) allows you to programmatically access BI platform functionalities such as administration, security configuration, and modification of the repository. In addition to the Business Intelligence Platform RESTful Web Service SDK, you can also use the SAP Crystal Reports RESTful Web Services (CR REST SDK) and the SAP Web Intelligence RESTful Web Services (WEBI REST SDK).

Implementation

An application has been designed and implemented using Java to automate the extraction of the SQL queries for all the Webi reports from the server at once.

Tools used:

  • Postman (Third party application)
  • Eclipse IDE

The structure of the application is as below:

The application comprises the required Java jar files, Java class files, Java properties files, and logs. The Java class file (SqlExtract) is the source code; it is compiled and executed from the command prompt as follows:

Step 1

  • javac -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract.java

The above command compiles the Java code.

Step 2

  • java -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract

The above command runs the compiled Java class.

The Java properties file (log4j) is used to set the configuration for the Java code to run. The path for the log file can also be set in the properties file.
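A minimal log4j.properties sketch along these lines might look as follows (the appender name, log file path, and pattern are assumptions, not taken from the application):

# log4j configuration sketch; adjust the log file path as needed
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=C:/SqlExtract/logs/SqlExtractLogger.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c - %m%n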


The log output (SqlExtractLogger) consists of the required output file with all the extracted queries for the Webi reports, along with the data source name, type, and row count for each query, written to the folder set by the user in the properties file.


The application is standalone and can run on any Windows platform or server that has a Java JRE installed (version 1.6 or later preferred).

Note: All the above steps required to execute the application are consolidated in the (steps) file.

Conclusion

SAP BO provides RESTful web services to traverse its repository, fetch structural information, and modify the metadata structure based on user requirements. Integration with programming languages such as Python or Java extends this scope considerably, allowing the user to automate workflows and solve backtracking problems.

Handling RESTful web services requires expertise in server administration and programming, as changes made to the metadata are irreversible.

About the Author –

Alan is an SAP Business Intelligence consultant with critical thinking and an analytical mind. He believes in ‘The more extensive a man’s knowledge of what has been done, the greater will be his power of knowing what to do’.

Artificial Intelligence in Healthcare

Dr. Ramjan Shaik

Scientific progress is about many small advancements and occasional big leaps. Medicine is no exception. In a time of rapid healthcare transformation, health organizations must quickly adapt to evolving technologies, regulations, and consumer demands. Since the inception of electronic health record (EHR) systems, volumes of patient data have been collected, creating an atmosphere suitable for translating data into actionable intelligence. The growing field of artificial intelligence (AI) has created new technology that can handle large data sets, solving complex problems that previously required human intelligence. AI integrates these data sources to develop new insights on individual health and public health.

Highly valuable information can sometimes get lost amongst trillions of data points, costing the industry around $100 billion a year. Providers must ensure that patient privacy is protected, and consider ways to find a balance between costs and potential benefits. The continued emphasis on cost, quality, and care outcomes will perpetuate the advancement of AI technology to realize additional adoption and value across healthcare.

Although most organizations utilize structured data for analysis, valuable patient information is often “trapped” in an unstructured format. This type of data includes physician and patient notes, e-mails, and audio voice dictations. Unstructured data is frequently richer and more multifaceted. It may be more difficult to navigate, but unstructured data can lead to a plethora of new insights. Using AI to convert unstructured data to structured data enables healthcare providers to leverage automation and technology to enhance processes, reduce the staff required to monitor patients while filling gaps in healthcare labor shortages, lower operational costs, improve patient care, and monitor the AI system for challenges.

AI is playing a significant role in medical imaging and clinical practice. Providers and healthcare organizations have recognized the importance of AI and are tapping into intelligence tools. Growth in the AI health market is expected to reach $6.6 billion by 2021 and to exceed $10 billion by 2024.  AI offers the industry incredible potential to learn from past encounters and make better decisions in the future. Algorithms could standardize tests, prescriptions, and even procedures across the healthcare system, being kept up-to-date with the latest guidelines in the same way a phone’s operating system updates itself from time to time.

There are three main areas where AI efforts are being invested in the healthcare sector:

  • Engagement – This involves improving how patients interact with healthcare providers and systems.
  • Digitization – AI and other digital tools are expected to make operations more seamless and cost-effective.
  • Diagnostics – By using products and services that employ AI algorithms, diagnosis and patient care can be improved.

AI will be most beneficial in three other areas, namely physicians’ clinical judgment and diagnosis, AI-assisted robotic surgery, and virtual nursing assistants.

Following are some of the scenarios where AI makes a significant impact in healthcare:

  • AI can be utilized to provide personalized and interactive healthcare, including anytime face-to-face appointments with doctors. AI-powered chatbots can review a patient’s symptoms and recommend whether a virtual consultation or a face-to-face visit with a healthcare professional is necessary.
  • AI can enhance the efficiency of hospitals and clinics in managing patient data, clinical history, and payment information by using predictive analytics. Hospitals are using AI to gather information on trillions of administrative and health record data points to streamline the patient experience. This collaboration of AI and data helps hospitals/clinics to personalize healthcare plans on an individual basis.
  • A taskforce augmented with artificial intelligence can quickly prioritize hospital activity for the benefit of all patients. Such projects can improve hospital admission and discharge procedures, bringing about enhanced patient experience.
  • Companies can use algorithms to scrutinize huge volumes of clinical and molecular data to personalize healthcare treatments, developing AI tools that collect and analyze data from genetic sequencing to image recognition, empowering physicians to improve patient care. AI-powered image analysis helps in connecting data points that support cancer discovery and treatment.
  • Big data and artificial intelligence can be used in combination to predict clinical, financial, and operational risks by taking data from all the existing sources. AI analyzes data throughout a healthcare system to mine, automate, and predict processes. It can be used to predict ICU transfers, improve clinical workflows, and even pinpoint a patient’s risk of hospital-acquired infections. Using artificial intelligence to mine health data, hospitals can predict and detect sepsis, which ultimately reduces death rates.
  • AI helps healthcare professionals harness their data to optimize hospital efficiency, better engage with patients, and improve treatment. AI can notify doctors when a patient’s health deteriorates and can even help in the diagnosis of ailments by combing its massive dataset for comparable symptoms. By collecting a patient’s symptoms and inputting them into the AI platform, doctors can diagnose more quickly and effectively.
  • Robot-assisted surgeries, ranging from minimally-invasive procedures to open-heart surgeries, enable doctors to perform procedures with precision, flexibility, and control that go beyond human capabilities, leading to fewer surgery-related complications, less pain, and quicker recovery times. Robots can be developed to improve endoscopies by employing the latest AI techniques, helping doctors get a clearer view of a patient’s illness from both a physical and a data perspective.

Having seen the advancements of AI in various facets of healthcare, it must be realized that AI is not yet ready to fully interpret a patient’s nuanced response to a question, nor is it ready to replace examining patients – but it is efficient in making differential diagnoses from clinical results. It should be understood very clearly that the role of AI in healthcare is to supplement and enhance human judgment, not to replace physicians and staff.

We at GAVS Technologies are fully equipped with cutting-edge AI technology, skills, facilities, and manpower to make a difference in healthcare.

Following are the ongoing and in-pipeline projects that we are working on in healthcare:

ONGOING PROJECT:

  • AI DevOps Automation Service Tools

PROJECTS IN PIPELINE:

  • AIOps Artificial Intelligence for IT Operations
  • AIOps Digital Transformation Solutions
  • Best AI Auto Discovery Tools
  • Best AIOps Platforms Software

Following are the projects that are being planned:

  • Controlling Alcohol Abuse
  • Management of Opioid Addiction
  • Pharmacy Support – drug monitoring and interactions
  • Reducing medication errors in hospitals
  • Patient Risk Scorecard
  • Patient Wellness – Chronic Disease management and monitoring

In conclusion, it is evident that the advent of AI in the healthcare domain has had a tremendous impact on patient treatment and care. For more information on how our AI-led solutions and services can help your healthcare enterprise, please reach out to us here.

About the Author –

Dr. Ramjan is a Data Analyst at GAVS. He has a Doctorate degree in the field of Pharmacy. He is passionate about drawing insights out of raw data and considers himself to be a ‘Data Person’.

He loves what he does and tries to make the most of his work. He is always learning something new, from programming, data analytics, and data visualization to ML, AI, and more.