Why is AIOps an Industrial Benchmark for Organizations to Scale in this Economy?

Ashish Joseph

Business Environment Overview

In this pandemic economy, the topmost priority for most companies is to ensure that operating costs and business processes are optimized and streamlined. Organizations must be more proactive than ever and identify gaps that need to be acted upon at the earliest.

The industry has been striving for efficiency and effectiveness in its operations, day in and day out. As a reliability check to ensure operational standards, many organizations consider the following levers:

  1. High Application Availability & Reliability
  2. Optimized Performance Tuning & Monitoring
  3. Operational gains & Cost Optimization
  4. Generation of Actionable Insights for Efficiency
  5. Workforce Productivity Improvement

Organizations that have prioritized the above levers in their daily operations require dedicated teams to analyze different silos and implement solutions that deliver results. Running projects of this complexity affects the scalability and monitoring of these systems. This is where AIOps platforms come in, providing customized solutions for the growing needs of organizations of any size.

Deep Dive into AIOps

Artificial Intelligence for IT Operations (AIOps) is a platform that provides multiple layers of functionality leveraging machine learning and analytics. Gartner defines AIOps as a combination of big data and machine learning functionalities that empower IT functions, enabling scalability and robustness of the entire ecosystem.

These systems transform the existing landscape to analyze and correlate historical and real-time data to provide actionable intelligence in an automated fashion.


AIOps platforms are designed to handle large volumes of data. These tools offer various data collection methods, integrate multiple data sources, and generate visual analytical intelligence. They are centralized and flexible across directly and indirectly coupled IT operations for data insights.

The platform aims to bring an organization’s infrastructure monitoring, application performance monitoring, and IT systems management processes under a single roof to enable big data analytics that yield correlation and causality insights across all domains. These functionalities open different avenues for system engineers to proactively determine how to optimize application performance, quickly find potential root causes, and design preventive steps to stop issues from ever happening.

AIOps has transformed the culture of IT war rooms from reactive firefighting to proactive problem prevention.

Industrial Inclination to Transformation

The pandemic economy has challenged the traditional way companies choose their transformational strategies. Machine learning-powered automation for creating an autonomous IT environment is no longer a luxury. The use of mathematical and logical algorithms to derive solutions and forecasts for issues has a direct correlation with the overall customer experience. In this pandemic economy, customer attrition has a serious impact on annual recurring revenue. Hence, organizations must reposition their strategies to be more customer-centric in everything they do. Providing customers with best-in-class service, coupled with continuous availability and enhanced reliability, has become an industry standard.

As reliability and scalability are crucial factors for any company’s growth, cloud technologies have seen growing demand. This shift of core business workloads to the cloud has made AIOps platforms more accessible and easier to integrate. With the handshake between analytics and automation, AIOps has become a transformative technology investment that any organization can make.

As organizations scale in size, so do the workforce and the complexity of their processes. This growth often burdens organizations with time-pressed teams, high delivery pressure, and reactive housekeeping strategies. An organization must be ready to meet present and future demands with systems and processes that scale seamlessly. This is why AIOps platforms serve as a multilayered functional solution that integrates with existing systems to manage and automate tasks with efficiency and effectiveness. When scaling results in process complexity, AIOps platforms convert that complexity into effort savings and productivity enhancements.

Across the industry, many organizations have implemented AIOps platforms as transformative solutions to help them meet their present and future demands. Various studies conducted by different research groups have quantified the resulting effort savings and productivity improvements.

The AIOps Organizational Vision

As the digital transformation race has been in full throttle during the pandemic, AIOps platforms have also evolved. The industry had earlier ventured into traditional event correlation and operations analytics tools that helped organizations reduce incidents and the overall MTTR. AIOps is relatively new to the market, as Gartner coined the term in 2016. Today, AIOps has attracted a lot of attention from multiple industries looking to analyze its feasibility of implementation and the return on investment from the overall transformation. Google Trends shows a significant increase in searches for AIOps over the last couple of years.


While making a well-informed decision to include AIOps in the organization’s vision for growth, we must analyze the following:

  1. Understanding the feasibility and concerns for its future adoption
  2. Classification of business processes and use cases for AIOps intervention
  3. Quantification of operational gains from incident management using the functional AIOps tools

The vision of AIOps is to provide tools that transform system engineers into reliability engineers and bring about systems that trend towards zero incidents.

Because above all, Zero is the New Normal.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management. He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Patient Segmentation Using Data Mining Techniques

Srinivasan Sundararajan


Patient Segmentation & Quality Patient Care

As the need for quality and cost-effective patient care increases, healthcare providers are increasingly focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. Simply put, data-driven healthcare is augmenting the human intelligence based on experience and knowledge.

Segmentation is a standard technique used in Retail, Banking, Manufacturing, and other industries that need to understand their customers in order to provide better customer service. Customer segmentation defines the behavioral and descriptive profiles of customers. These profiles are then used to provide personalized marketing programs and strategies for each group.

In a way, patients are like customers to healthcare providers. Though quality of care takes precedence over profit-making intentions, a similar segmentation of patients will immensely benefit healthcare providers, mainly for the following reasons:

  • Customizing the patient care based on their behavior profiles
  • Enabling a stronger patient engagement
  • Providing the backbone for data-driven decisions on patient profile
  • Performing advanced medical research like launching a new vaccine or trial

The benefits are obvious, and individual hospitals may add more points to the above list. The rest of this article is about how to perform patient segmentation using data mining techniques.

Data Mining for Patient Segmentation

In data mining, a segmentation or clustering algorithm iterates over cases in a dataset to group them into clusters that share similar characteristics. These groupings are useful for exploring data, identifying anomalies in the data, and creating predictions. Clustering is an unsupervised data mining (machine learning) technique used for grouping data elements without advance knowledge of the group definitions.

K-means clustering is a well-known method of assigning cluster membership by minimizing the differences among items in a cluster while maximizing the distance between clusters. The algorithm first identifies relationships in a dataset and generates a series of clusters based on those relationships. A scatter plot is a useful way to visually represent how the algorithm groups data, as shown in the following diagram. The scatter plot represents all the cases in the dataset, where each case is a point on the graph. The cluster points on the graph illustrate the relationships that the algorithm identifies.

[Figure: Scatter plot of the clusters identified by the K-Means algorithm]

One of the important parameters for the K-Means algorithm is the number of clusters, or the cluster count. We need to set this to a value that is meaningful to the business problem being solved. However, there are well-established techniques to find the optimal number of clusters for a given dataset, as explained next.

To determine the number of clusters for the algorithm to use, we can plot the within-cluster sum of squares against the number of clusters extracted. The appropriate number of clusters to use is at the bend or ‘elbow’ of the plot. The Elbow Method is one of the most popular ways to determine this optimal value of k, i.e. the number of clusters. The code sketch below creates such a curve.
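
A minimal sketch of the Elbow Method in Python, assuming scikit-learn and matplotlib are available; the data below is a random placeholder standing in for the actual patient attributes:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    # X: rows = patients, columns = attributes (e.g. HbA1c, FBG, LDL-C, ...)
    X = np.random.rand(200, 9)  # placeholder data for illustration only

    wcss = []  # within-cluster sum of squares for each candidate k
    for k in range(1, 11):
        model = KMeans(n_clusters=k, random_state=42, n_init=10)
        model.fit(X)
        wcss.append(model.inertia_)  # inertia_ is the WCSS for this k

    plt.plot(range(1, 11), wcss, marker='o')
    plt.xlabel('Number of clusters (k)')
    plt.ylabel('Within-cluster sum of squares')
    plt.title('Elbow Method')
    plt.show()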

[Figure: Elbow curve plotting the within-cluster sum of squares against the number of clusters]

In this example, based on the graph, it looks like k = 4 would be a good value to try.

Reference Patient Segmentation Using K-Means Algorithm in GAVS Rhodium Platform

In the GAVS Rhodium Platform, which helps healthcare providers with Patient Data Management and Patient Data Sharing, there is a reference implementation of patient segmentation using the K-Means algorithm. The following attributes are used, based on publicly available patient admission data (no personal information is used in this dataset). Note that the reference implementation uses sample attributes; in a real scenario, consulting healthcare practitioners will help identify the correct attributes to use for clustering.

To prepare the data for clustering, patients must be separated along the following dimensions:

  • HbA1c: Measuring the glycated form of hemoglobin to obtain the three-month average of blood sugar.
  • Triglycerides: Triglycerides are the main constituents of natural fats and oils. This test indicates the amount of fat or lipid found in the blood.
  • FBG: Fasting Plasma Glucose test measures the amount of glucose levels present in the blood.
  • Systolic: Blood Pressure is the pressure of circulating blood against the walls of Blood Vessels. This test relates to the phase of the heartbeat when the heart muscle contracts and pumps blood from the chambers into the arteries.
  • Diastolic: The diastolic reading is the pressure in the arteries when the heart rests between beats.
  • Insulin: Insulin is a hormone that helps move blood sugar, known as glucose, from your bloodstream into your cells. This test measures the amount of insulin in your blood.
  • HDL-C: Cholesterol is a fat-like substance that the body uses as a building block to produce hormones. HDL-C or good cholesterol consists primarily of protein with a small amount of cholesterol. It is considered to be beneficial because it removes excess cholesterol from tissues and carries it to the liver for disposal. The test for HDL cholesterol measures the amount of HDL-C in blood.
  • LDL-C: LDL-C or bad cholesterol present in the blood as low-density lipoprotein, a relatively high proportion of which is associated with a higher risk of coronary heart disease. This test measures the LDL-C present in the blood.
  • Weight: The body weight of the patient, recorded at admission.

The above tests are taken for the patients during the admission process.

An illustrative sketch of the kind of code that creates the patient clustering is shown below (the actual Rhodium implementation may differ).

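The sketch assumes the attributes listed above are available in a pandas DataFrame and that scikit-learn is installed; the file name and column names are illustrative assumptions:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Hypothetical input file; column names mirror the attributes described above.
    patients = pd.read_csv('patient_admit_data.csv')
    features = ['HbA1c', 'Triglycerides', 'FBG', 'Systolic', 'Diastolic',
                'Insulin', 'HDL-C', 'LDL-C', 'Weight']

    # Standardize so that no single attribute dominates the distance calculation.
    scaled = StandardScaler().fit_transform(patients[features])

    # k = 4, as suggested by the elbow plot above.
    kmeans = KMeans(n_clusters=4, random_state=42, n_init=10)
    patients['Cluster'] = kmeans.fit_predict(scaled)

    # Per-cluster profile of the average lab values.
    print(patients.groupby('Cluster')[features].mean())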

Below is the output cluster created by the above algorithm.

Just from this sample, healthcare providers can infer patient behavior and patterns based on their creatinine and glucose levels; in real-life situations, other attributes can be used.

AI will play a major role in future healthcare data management and decision-making, and data mining algorithms like K-Means provide an option to segment patients based on their attributes, which will improve the quality of patient care.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using a combination of multi-modal databases, Blockchain, and Data Mining. The solutions aim at patient data sharing within hospitals as well as across hospitals (Healthcare Interoperability), while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero knowledge proofs.

Palo Alto Firewall – DNS Sinkhole

Ganesh Kumar J

Starting with PAN-OS 6.0, DNS sinkhole is an action that can be enabled in Anti-Spyware profiles. A DNS sinkhole can be used to identify infected hosts on a protected network using DNS traffic in environments where the firewall can see the DNS query to a malicious URL.

The DNS sinkhole enables the Palo Alto Networks device to forge a response to a DNS query for a known malicious domain/URL and causes the malicious domain name to resolve to a definable IP address (fake IP) that is given to the client. If the client attempts to access the fake IP address and there is a security rule in place that blocks traffic to this IP, the information is recorded in the logs.

Sample Flow

We need to keep the following in mind before assigning an IP address to the DNS sinkhole configuration.

When choosing a “fake IP”, make sure that the IP address is a fictitious IP address that does not exist anywhere inside the network. DNS and HTTP traffic must pass through the Palo Alto Networks firewall for the malicious URL to be detected and for the access to the fake IP to be stopped. If the fake IP is routed to a different location, and not through the firewall, this will not work properly.

Steps:

  1. Make sure the latest Antivirus updates are installed on the Palo Alto Networks device. From the WebUI, go to Device > Dynamic Updates on the left. Click “Check Now” in the lower left, and make sure that the Antivirus updates are current. If they are not, update them before proceeding. Automatic updates can also be configured if they are not set up.

Fig1.1


Note: A paid Threat Prevention subscription is required for the DNS sinkhole to function properly.

  2. Configure DNS Sinkhole Protection inside an Anti-Spyware profile. Click on Objects > Anti-Spyware under Security Profiles on the left.
    Use either an existing profile or create a new profile. In the example below, the “alert-all” profile is being used:

Fig1.2:


Click the name of the profile (alert-all), then click on the DNS Signatures tab.

Fig1.3:


Change the “Action on DNS queries” to ‘sinkhole’ if it is not already set to sinkhole.
Click on the Sinkhole IPv4 field and either select the default Palo Alto Networks Sinkhole IP (72.5.65.111) or a different IP of your choosing. If you opt to use your own IP, ensure the IP is not used inside your network and preferably is not routable over the internet (RFC 1918).
Click on Sinkhole IPv6 and enter a fake IPv6 address. Even if IPv6 is not used, something still needs to be entered; the example shows ::1. Click OK.

Note: If nothing is entered for the Sinkhole IPv6 field, OK will remain grayed out.

  3. Apply the Anti-Spyware profile to the security policy that allows DNS traffic from the internal network (or internal DNS server) to the internet. Click on Policies > Security on the left. In the rules, locate the rule that allows DNS traffic outbound, click on its name, go to the Actions tab, and make sure that the proper Anti-Spyware profile is selected. Click OK.

Fig1.4:


  4. The last thing needed is a security rule that blocks all web-browsing and SSL access to the fake IP 72.5.65.111 (and also ::1 if using IPv6). This ensures traffic to the fake IP from any infected machines is denied.

Fig1.5:


  5. Commit the configuration.

Fig1.6:


(To be continued…)


About the Author –

Ganesh is currently managing the Network, Security and Engineering team for a large US-based customer. He has been associated with the Network & Security domain for more than 15 years.

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. However, new advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is and then the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard. They are used to isolate different cargoes transported via ships. Software technologies use a containerization approach in the same way.

Containers are different from Virtual Machines (VMs): VMs need a guest operating system that runs on a host operating system (OS), whereas containers use OS-level virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate operating system.

In containers, software and its dependencies are packaged so that they can run anywhere, whether on an on-premises desktop or in the cloud.

[Figure: Containers compared with virtual machines]

Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Container Image Vulnerabilities

When a container is created, its image may be fully patched with no known vulnerabilities. But a vulnerability might be discovered later, while that container image is no longer being patched. Traditional systems can be patched in place when a fix for the vulnerability becomes available, but for containers, the update must be applied upstream in the image, which must then be redeployed. So containers end up carrying vulnerabilities because an older image version is still deployed.

Also, if the container image is misconfigured or unwanted services are running, it will lead to vulnerabilities.

Countermeasures

If you use traditional vulnerability assessment tools to assess containers, they will produce false positives. You need a tool that has been designed to assess containers so that you get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together. Hence, there are chances of malicious files getting added unintentionally or intentionally. Such malicious software has the same effect as it would on traditional systems.

If secrets are embedded in clear text, it may lead to security risks if someone unauthorized gets access.

Countermeasures

Continuous monitoring of all images for embedded malware with signature and behavioral detection can mitigate embedded malware risks.

Secrets should never be stored inside container images; when required, they should be provided dynamically at runtime, as in the sketch below.
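
A minimal illustration in Python; the environment variable name is an assumption, and in practice the orchestrator or a secrets manager injects the value at container start-up:

    import os

    # Read the secret at runtime instead of baking it into the image.
    db_password = os.environ.get("DB_PASSWORD")  # hypothetical variable injected at start-up
    if db_password is None:
        raise RuntimeError("DB_PASSWORD was not provided to the container at runtime")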

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This capability may lead teams to run container images from a third party without validating them, thus introducing data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images, to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

A registry is nothing but a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, it can lead to man-in-the-middle attacks that intercept network traffic to steal programmer or admin credentials, or to serve outdated or fraudulent images.

You should configure development tools and running containers to connect to registries only over encrypted channels to overcome the insecure connection issue.

  2. Insufficient authentication and authorization restrictions

As we have seen, registries store container images with sensitive information. Insufficient authentication and authorization can result in exposure of the technical details of an app and loss of intellectual property. It can also lead to compromise of containers.

Access to registries should be authenticated, and only trusted entities should be able to add images; all write access should be periodically audited and read access should be logged. Proper authorization controls should be enabled to avoid authentication- and authorization-related risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed with the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating all users as administrators will affect the operation of the containers managed by the orchestrator.

Orchestrator users should be given only the required access, with proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In container environments, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters only see the encrypted packets traveling between the hosts. This leads to security blindness and makes traffic monitoring ineffective.

To overcome this risk, orchestrators need to separate network traffic into virtual networks according to its sensitivity level.

  3. Orchestrator node trust

Special attention is needed to maintain trust between the hosts, especially the orchestrator node. A weakness in orchestrator configuration increases risk; for example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, the orchestrator should be configured securely for nodes and apps. If any node is compromised, it should be isolated and removed without disturbing the other nodes.

Container Risks

  1. App vulnerabilities

It is always good to have defense in depth. Even after following the recommendations above, containers may still be compromised if the apps themselves are vulnerable.

As we have already seen, traditional security tools may not be effective when used for containers. You need a container-aware tool that detects behavioral anomalies in the app at runtime so issues can be found and mitigated.

  2. Rogue containers

It is possible to have rogue containers. Developers may have launched them to test their code and then left them running. These may lead to exploits, as such containers might not have been thoroughly checked for security loopholes.

You can overcome this with separate environments for development, test, and production, along with role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has an attack surface, and the larger the attack surface, the easier it is for an attacker to find and exploit a vulnerability and compromise the host operating system and the containers that run on it.

If you cannot use a container-specific operating system to minimize the attack surface, you can follow the NIST SP 800-123 guide to general server security.

  2. Shared kernel

If you only run containers on a host OS, you will have a smaller attack surface than on a normal host machine, which needs additional libraries and packages to run a web server, a database, and other software.

You should not mix container and non-container workloads on the same host machine.

If you wish to explore this topic further, I suggest you read NIST SP 800-190.



About the Author –

Anandharaj is a DevSecOps Lead at GAVS and has over 13 years of experience in cybersecurity across different verticals, including network security, application security, computer forensics, and cloud security.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, especially in the mainstream media and also among our clients, is that conducting a successful attack against an IT system requires ‘investigating’ and finding a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risks is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST found that correcting security bugs early on is orders of magnitude cheaper than doing so when the development has been completed.

When composing a text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, there are security tools known as ASTs (Application Security Testing) that find programming errors that introduce security weaknesses. ASTs report the file and line where the vulnerability is located, in the same way that a text editor reports the page and line that contain a typo.
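
As a concrete illustration (a generic sketch, not drawn from any particular AST product), the kind of programming error these tools flag, and its remediation, looks like this:

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Typically flagged by an AST: user input concatenated into SQL (injection risk).
        query = "SELECT * FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Remediation: a parameterized query keeps data separate from the SQL statement.
        return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()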

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just like it is almost impossible to catch all errors in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools, also known as ASTs or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

The SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming that could lead to a vulnerability.

The DAST dynamic analysis replicates the view of an attacker. The tool executes hundreds or thousands of requests against the application, designed to replicate the activity of an attacker, to find security vulnerabilities. This is a black-box analysis because the point of view is purely external, with no knowledge of the application’s internal architecture.

The level of detail provided by the two types of tools is different. SAST tools provide file and line where the vulnerability is located, but no URL, while DAST tools provide the external URL, but no details on the location of the problem within the code base of the application. Some teams use both tools to improve visibility, but this requires long and complex triaging to manage the vulnerabilities.

The Interactive AST Approach

The Interactive Application Security Testing (IAST) tools combine the static approach and the dynamic approach. They have access to the internal structure of the application, and to the way it behaves with actual traffic. This privileged point of view is ideal to conduct security analysis.

From an architecture point of view, the IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms such as Application Performance Monitoring tools (APMs) share this proven approach.

Once the agent has been installed, it incorporates automatic security sensors in the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage is much better than the visibility that SAST and DAST rely on.

In terms of specific results, we can look at two important metrics – how many types of vulnerabilities the tool finds, and how many of the identified vulnerabilities are false positives. Well, the best DAST is able to find only 18% of the existing vulnerabilities on a test application. And even worse, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not true problems!

[Figure: OWASP Benchmark results comparing detection rates and false positives across AST tools]

Source: Hdiv Security via OWASP Benchmark public result data

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments; development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic points of view allows us to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of the secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces manual verification activities.

An IAST tool can add tons of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.

Reduce Test Times and Increase Coverage with AI & ML

Kevin Surace

Chairman & CTO, Appvance.ai

With the need for frequent builds—often many times in a day—QEs can only keep pace through AI-led testing. It is the modern approach that allows quality engineers to create scripts and run tests autonomously to find bugs and provide diagnostic data to get to the root cause.

AI-driven testing means different things to different QA engineers. Some see it as using AI for identifying objects or helping create script-less testing; some consider it as autonomous generation of scripts while others would think in terms of leveraging system data to create scripts which mimic real user activity.

Our research shows that teams who are able to implement what they can in scripts and manual testing have, on average, less than 15% code, page, action, and likely user flow coverage. In essence, even if you have 100% code coverage, you are likely testing less than 15% of what users will do. That in itself is a serious issue.

Starting in 2012, Appvance set out to rethink the concept of QA automation. Today our AIQ Technology combines tens of thousands of hours of test automation machine learning with the deep domain knowledge, the essential business rules, that each QE specialist knows about their application. We create an autonomous expert system that spawns multiple instances of itself that swarm over the application, testing at the UX and API levels. Along the way, these Intelligences write the scripts, hundreds and thousands of them, that describe their individual journeys through the application.

And why would we need to generate so many tests fully autonomously? Because applications today are 10X the size they were just ten years ago, but your QE team doesn’t have 10X the number of test automation engineers, and you have 10X less time to do the work than 10 years ago. Just to keep pace with the dev team, each quality engineer needs to be 100X more productive than they were 10 years ago.

Something had to change; that something is AI.

AI-testing in two steps

We leveraged AI and witnessed over 90% reduction in human effort to find the same bugs. So how does this work?

It’s really a two-stage process.

First, leveraging key AI capabilities in TestDesigner, Appvance’s codeless test creation system, we make it possible to write scripts faster, identify more resilient accessors, and substantially reduce maintenance of scripts.

With AI alongside you as you implement an automated test case, you get a technology that suggests the most stable accessors and constantly improves and refines them. It also creates “fallback accessors”: when a test runs and hits an accessor change, the script can continue even though changes have been made to the application. And finally, the AI can self-heal scripts, updating them with new accessors without human assistance. These AI-based, built-in technologies give you the most stable scripts every time, with the most robust accessor methodologies and self-healing. Nothing else comes close.

The second stage deals with autonomous generation of tests. To beat the queue and crush it, you need a heavy lift for finding bugs and, as we have learned, to go far beyond the use cases that a business analyst listed. Job one is to find bugs and prioritize them, leveraging AI to generate tests autonomously.

Appvance’s patented AI engine has already been trained with millions of actions. You teach it the business rules of your application (machine learning). It will then create real user flows, take every possible action, discover every page, fill out every form, get to every state, and validate the most critical outcomes just as you trained it to do. It does all this without writing or recording a single script. We call this ‘blueprinting’ an application. We do this at every new build. Multiple instances of the AI spin up, each selecting a unique path through the application, typically finding 1000s or more flows in a matter of minutes. When complete, the AI hands you the results, including bugs, all the diagnostic data to help find the root cause, and the reusable test scripts to reproduce the bug. A further turn of the crank can refine these scripts into exact replicas of what production users are doing and apply them to the new build.

Any modern approach to continuous testing needs to leverage AI both to help QA engineers create scripts and to autonomously create tests, so that both parts work together to find bugs and provide data to get to the root cause. That AI-driven future is available today from Appvance.

About the Author –

Kevin Surace is a highly lauded entrepreneur and innovator. He’s been awarded 93 worldwide patents, and was Inc. Magazine Entrepreneur of the Year, CNBC Innovator of the Decade, a Davos World Economic Forum Tech Pioneer, and inducted into the RIT Innovation Hall of Fame. Kevin has held leadership roles with Serious Energy, Perfect Commerce, CommerceNet and General Magic and is credited with pioneering work on AI virtual assistants, smartphones, QuietRock and the Empire State Building windows energy retrofit.

Business Intelligence Platform RESTful Web Service

Albert Alan

Restful API

RESTful Web Services are web services based on the REST architecture. Representational State Transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web. In this architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs), typically links on the Web.


REST has some advantages over SOAP (Simple Object Access Protocol) but is similar in technology since it is also a function call via the HTTP protocol. REST is easier to call from various platforms, transfers pure human-readable data in JSON or XML, and is faster and saves resources.

The basic idea of REST is that an object is accessed via its resource representation, not via its methods. The state of the object can be changed by the REST access; the change is driven by the passed parameters. A frequent application is connecting SAP PI via the REST interface.

When to use Rest Services

  • You want to access BI platform repository objects or perform basic scheduling.
  • You want to use a programming language that is not supported by another BI platform SDK.
  • You want to extract all the query details and number of records per query for all the reports like Webi and Crystal, etc.
  • You want to extract folder path of all reports at once.

Process Flow

[Figure: RESTful web service process flow]

RESTful Web Service Requests

To make a RESTful web service request, you need the following (a short example using these pieces follows the list):

  • URL – The URL that hosts the RESTful web service.
  • Method – The type of HTTP method to use for sending the request, for example GET, PUT, POST, or DELETE.
  • Request header – The attributes that describe the request.
  • Request body – Additional information that is used to process the request.
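
A minimal sketch in Python using the requests library; the host and port match the base URL used later in this article, while the credentials, authentication type, payload keys, and token header name are illustrative assumptions that may vary by BI platform version:

    import requests

    base_url = "http://localhost:6405/biprws"  # URL that hosts the RESTful web service
    headers = {"Content-Type": "application/json", "Accept": "application/json"}

    # Method + URL + headers + body: POST logon details to obtain a session token.
    # Check the template returned by GET /v1/logon/long for the exact payload expected.
    body = {"userName": "Administrator", "password": "Password1", "auth": "secEnterprise"}
    response = requests.post(base_url + "/v1/logon/long", json=body, headers=headers)
    response.raise_for_status()

    # Subsequent requests carry the logon token returned by the server (header name assumed).
    headers["X-SAP-LogonToken"] = response.headers.get("X-SAP-LogonToken", "")
    users = requests.get(base_url + "/v1/users", headers=headers)
    print(users.status_code)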

Common RWS Error Messages

[Figure: Common RESTful web service error messages]

Restful Web Service URIs Summary List

  • /v1: Service document that contains a link to the /infostore API. This is the root level of an infostore resource.
  • /v1/infostore: Feed containing all the objects in the BOE system. Example: /v1/infostore
  • /v1/infostore/<object_id>: Entry corresponding to the info object with the given SI_ID. Example: /v1/infostore/99
  • /v1/logon/long: Returns the long form for logon, which contains the user and password authentication template. Used to log on to the BI system based on the authentication method.
  • /v1/users/<user_id>: XML feed of user details in the BOE system. You can modify a user using the PUT method and delete a user using the DELETE method.
  • /v1/usergroups/<usergroup_id>: XML feed of user group details in the BOE system. Supports the GET, PUT, and DELETE methods; you can modify a user group using PUT and delete a user group using DELETE.
  • /v1/folders/<folder_id>: XML feed displaying the details of the folder; it can be used to modify the details of the folder (PUT method) and to delete the folder (DELETE method).
  • /v1/publications: XML feed of all publications created in the BOE system. This API supports the GET method only.

Extended Workflow

 The workflow is as follows:

  • To Pass the Base URL

GET http://localhost:6405/biprws/v1/users

  • To Pass the Headers

  • To Get the xml/json response

Automation of Rest Call

The Business Intelligence platform RESTful Web Service SDK (BI-REST-SDK) allows you to programmatically access BI platform functionalities such as administration, security configuration, and modification of the repository. In addition to the Business Intelligence platform RESTful web service SDK, you can also use the SAP Crystal Reports RESTful Web Services (CR REST SDK) and SAP Web Intelligence RESTful Web Services (WEBI REST SDK).

Implementation

An application has been designed and implemented using Java to automate the extraction of the SQL queries for all the Webi reports from the server at once.

Tools used:

  • Postman (Third party application)
  • Eclipse IDE

The structure of the application is as below:

The application comprises the required Java JAR files, Java class files, Java properties files, and logs. The Java class file (SqlExtract) is the source code; it is compiled and executed from the command prompt as follows:

Step 1

  • javac -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract.java

The above command compiles the Java code.

Step 2

  • java -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract

The above command runs the compiled Java class.

The Java properties file (log4j) is used to set the configuration for the Java code to run. The path for the log file can also be set in the properties file.


The log (SqlExtractLogger) contains the required output with all the extracted queries for the Webi reports, along with the data source name, type, and row count for each query, written to the folder path set by the user in the properties file.


The application is standalone and can run on any Windows platform or server that has a Java JRE (version greater than 1.6 preferred) installed.

Note: All the above steps required to execute the application are consolidated in the (steps) file.

Conclusion

SAP BO provides RESTful web services to traverse its repository, fetch structural information, and modify the metadata structure based on user requirements. Integration with programming languages like Python, Java, etc. extends the scope even further, allowing the user to automate workflows and solve backtracking problems.

Handling RESTful web services needs expertise in server administration and programming, as changes made to the metadata are irreversible.


About the Author –

Alan is an SAP Business Intelligence consultant with a critical-thinking and analytical mind. He believes in ‘The more extensive a man’s knowledge of what has been done, the greater will be his power of knowing what to do’.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I would be elaborating on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How Blockchain Powered GAVS Rhodium Platform helps address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
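
To make the prover/verifier interaction concrete, here is a classic textbook construction (the Schnorr identification protocol), included only as an illustrative sketch and not specific to healthcare. The prover convinces the verifier that they know a secret x satisfying y = g^x mod p, without revealing x:

    \begin{align*}
    \text{Prover: }   & \text{choose a random } r \text{ and send the commitment } t = g^{r} \bmod p \\
    \text{Verifier: } & \text{send a random challenge } c \\
    \text{Prover: }   & \text{send the response } s = r + c \cdot x \bmod q \\
    \text{Verifier: } & \text{accept if } g^{s} \equiv t \cdot y^{c} \pmod{p}
    \end{align*}

Here q is the order of the generator g. The verifier becomes convinced that the prover knows x, while the exchanged values reveal nothing else about x.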

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is done based on a lack of trust.

  • Insurance companies are always wary of fraudulent claims (which is anyhow a major issue), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the patient has indeed been advised to take the medicines before dispensing them to the patient.
  • Patients most times also want to make sure that the diagnosis and treatment given to them are indeed proper and no wrong diagnosis is done.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, either of the parties, i.e. patient, hospital, pharmacy, insurance companies, can take on the role of a verifier, and typically patients and sometimes hospitals are the provers.

While ZKP can be applied to any of the transactions involving the above parties, current research in the industry is mostly focused on patient privacy rights, and ZKP initiatives target how much (or how little) information a patient (prover) must share with a verifier before getting the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I am not getting into the fundamentals of Blockchain, readers should understand that one of the fundamental backbones of Blockchain is trust within the context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, yet allowing participation in a transaction.

Some of the characteristics of the Blockchain transaction that makes it conducive for Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • Smart contract instance (i.e. the particular invocation of that smart contract) has an owner i.e. the public key of the account holder who creates the same, for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as they know the public key of the initiator.
  • Important aspects of an approval life cycle, like validation, approval, and rejection, can be assigned to other stakeholders by delegating those tasks to the respective stakeholder’s public key.
  • For example, if a doctor needs to approve a medical condition of a patient, the same can be delegated to the doctor and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented in any Blockchain platform including totally uncontrolled public blockchain platforms, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known, and each participant trusts the other, but the due diligence that is needed with the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain Involving Multiple Other Organizations, whose access rights differ. Each member controls their own access to Blockchain Network with Cryptographic Keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept in building trust-based networks. While a basic Blockchain platform can help bring in the concept in a trust-based manner, a lot of research is being done to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) utilizes this concept of a zero-knowledge proof. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article about Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple stages, and at the advanced maturity levels, Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey


The healthcare industry today is affected by fraud and lack of trust on one side, and growing patient privacy concerns on the other. In this context, the introduction of Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

Significance of CI CD Process in DevOps

Muraleedharan Vijayakumar

Developing and releasing software can be a complicated process, especially as applications, teams, and deployment infrastructure grow in complexity themselves. Often, challenges become more pronounced as projects grow. To develop, test, and release software quickly and consistently, developers and organizations have created distinct strategies to manage and automate these processes.

Did you know? Amazon releases new production code once every 11.6 seconds.

Why CI/CD/CD?

The era of digital transformation demands faster deployments into production. Faster deployments, however, do not warrant defective releases; the solution is DevOps. The development team, operations team, and IT services team have to work in tandem, and the magic circle that brings all of them together is DevOps.

To adopt a DevOps culture, implementing the right DevOps tools with the right DevOps process is essential. Continuous integration/continuous delivery/continuous deployment (CI/CD/CD) helps developers and testers ship software faster and more safely in a structured environment.

The biggest obstacle that needs to be overcome in constructing a DevOps environment is scalability. There are no definite measures of the scalability of an application or product development, but the DevOps environment should be ready to scale to meet business and technology needs. It lays a strong foundation for building an agile DevOps practice for the business.

Continuous Integration and Deployment have brought many benefits to the software delivery process. Initiating automated code builds once check-ins are completed, running automated test suites, flagging errors, and breaking builds when compliance is not adhered to have eased the way to deploying a stable release into a staging or production environment, eliminating manual errors and human bias.

How is CI/CD/CD Set Up?

Version control tools play an important role in the success of our DevOps pipeline, and designing a good source stage is pivotal to our CI/CD success. It ensures that we can version code, digital assets, and binary files (and more) all in one spot. This enables teams to communicate and collaborate better, and deploy faster.

Our code branching strategy determines how and when developers branch and merge. When deciding on a strategy, it is important to evaluate what makes sense for our team and product. Most version control systems will let you adopt and customize standard strategies like mainline, trunk-based, and task/feature branching.

Typical Branching Model Followed

A basic workflow starts with code being checked out. When the work in the branch is committed, CI processes are triggered. This can be done with a merge or pull request. Then the CI/CD pipeline kicks into high gear.

The goal of CI/CD is to continuously integrate changes to find errors earlier in the process, also known as ‘Shift Left’. The ultimate goal of having an automated CI/CD process in place is to identify errors or flag non-compliance at an early stage of the development process. This increases the project’s velocity by avoiding late-stage defects and delays, and it creates an environment where code is always ready for a release. With the right branching strategy, teams are equipped to deliver success.

Continuous Integration: Integrating newly developed code with the central repository is continuous integration. Automated CI results in automated builds that are triggered to merge the newly developed code into the repository. As part of this process, plugins can be added to perform static code analysis, security compliance checks, etc., to identify whether the newly added code would have any impact on the application. If there are compliance issues, the automated build breaks, and this is reported back to the developer with insights. Automated CI helps in increasing the productivity of the developers and the team.

Continuous Delivery: At the end of a successful CI run, Continuous Delivery is triggered. CD automates the software delivery process and commits to delivering the integrated code to the production stage without any bugs or delays. CD helps in merging the newly developed code into the main branch of the software so that a production-ready product is available with all the checks in place. CD also checks the quality of the code and performs tests to determine whether the functional build can be released to the production environment.

Continuous Deployment: The final and most critical part of DevOps is Continuous Deployment. After the successful merging of certified code, pipelines are triggered to deploy the code into the production environment. These pipelines are also triggered automatically and are constructed to handle the target environment, be it JAR or container deployments. The most important aspect of this pipeline is tagging the releases deployed in the production environment. If there are rollbacks, these tags help the team roll back to the right version of the build.

CI/CD/CD is an art that needs to be crafted in the most efficient way possible; done right, it helps the software development team achieve success at a faster pace.

Different Stages & Complete DevOps Setup

What is the CI/CD/CD Outcome?

[Figure: Outcomes of the CI/CD/CD process]

About the Author –

Muraleedharan is a Senior Technical Manager and has managed, developed, and launched cutting-edge business intelligence and analytics platforms using big data technologies. He has experience in hosting platforms in Microsoft Azure by leveraging MS PaaS. He is a product manager for zDesk, a Virtual Desktop offering from GAVS.
His passion is to get a frictionless DevOps operation in place in an environment to bring deployment time down to a few seconds.

Design-led Organization: Creative Thinking as a Practice!

Gogul R G

This is the first article in the ‘Design-led Organization’ series, which explores creative thinking as a practice at GAVS. It is the first step for readers into the world of design and creativity. So, let’s get started!

First, let’s see what design thinking is all about.

There is a common misconception that design thinking is new. But when you look back, people have long applied a human-centric, creative process to build meaningful and effective solutions. Design has been practiced for ages to build monuments, bridges, automobiles, subway systems, and more. Design is not limited to aesthetics; it is a mindset for thinking through a solution. Design thinking is a mindset for iteratively working through a complex problem and arriving at a viable solution.

Thinking outside the box can provide an innovative solution to a sticky problem. However, it can be a real challenge, as we naturally develop patterns of thinking based on the repetitive activities and commonly accessed knowledge we surround ourselves with. It takes effort to detach from a situation in which we are too closely involved in order to see better possibilities.

To illustrate how a fresh way of thinking can create unexpectedly good solutions, let’s look at a famous incident. Some years ago, a truck driver tried to pass under a low bridge. He failed, and the truck became firmly lodged under the bridge.


The driver was unable to continue driving through or reverse out. The stuck truck caused massive traffic problems, which resulted in emergency personnel, engineers, firefighters, and truck drivers gathering to negotiate various solutions to dislodge it.

Emergency workers were debating whether to dismantle parts of the truck or chip away at parts of the bridge. Each of them was looking for a solution within their respective area of expertise. A boy walking by and witnessing the intense debate looked at the truck, then at the bridge, then at the road, and said, “Why not just let the air out of the tires?”, to the absolute amazement of all the specialists and experts trying to resolve the issue.

When the solution was tested, the truck drove out with ease, having suffered only the damage caused by its initial attempt to pass underneath the bridge. The incident symbolizes the struggles we often face: the most obvious solutions are frequently the hardest to come by because of the self-imposed constraints we work within.

“Challenging our assumptions and everyday knowledge is often difficult for us humans, as we rely on building patterns of thinking in order not to have to learn everything from scratch every time.”

Let’s come back to our topic, “What is design thinking?” Tim Brown, Executive Chairman of IDEO, an international design and consulting firm, describes design thinking as follows:

“Design thinking is a human-centered approach to innovation that draws from the designer’s toolkit to integrate the needs of people, the possibilities of technology, and the requirements for business success.”

Now let’s think back to our truck example. A boy with a fresh mindset provided a simple solution to a complex problem. Yeah! This is the sweet spot. Everyone is creative and capable of thinking like a designer, and out of the box, to come up with a solution. Inculcating design as a mindset for finding solutions is what we call design thinking.

Yes, you read it right, everyone is creative…

We forget that back in kindergarten, we were all creative. We all played and experimented with weird things without fear or shame. We didn’t know enough not to. The fear of social rejection is something we learned as we got older. And that’s why it’s possible to regain our creative abilities, even decades later. In the field of design and user experience, when individuals stick with a methodology for a while, they end up doing amazing things. They come up with breakthrough ideas or suggestions and work creatively with a team to develop something truly innovative. They surprise themselves with the realization that they are a lot more creative than they had thought. That early success shakes up how they see themselves and makes them eager to do more.

We just need to rediscover what we already have: the capacity to imagine, or build upon, new-to-the-world ideas. But the real value of creativity doesn’t emerge until you are brave enough to act on those ideas.

Geshe Thupten Jinpa, who has been the Dalai Lama’s chief English translator for more than twenty years, shared an insight about the nature of creativity. Jinpa pointed out that there’s no word in the Tibetan language for ‘creativity’ or ‘being creative’. The closest translation is ‘natural’. In other words, if you want to be more creative, you should be more natural! So…be natural!

At the workplace, complex problems can be sorted out more easily when you approach them with creativity and a design thinking mindset. Creativity can be improved by following the steps below.

  1. Go for a walk.
  2. Play your favorite games.
  3. Move your eyes.
  4. Take a break and enjoy yourself.
  5. Congratulate yourself each time you do something well.
  6. Estimate time, distance, and money.
  7. Take a route you never have taken before.
  8. Look for images in mosaics, patterns, textures, clouds, stars…
  9. Try something you have never done before.
  10. Do a creative exercise.
  11. Start a collection (stamps, coins, art, stationery, anything you wish to collect)
  12. Watch Sci-Fi or fantasy films.
  13. Change the way you do things – there are no routine tasks, only routine way of doing things.
  14. Wear a color you do not like.
  15. Think about how they invented equipment or objects you use daily.
  16. Make a list of 10 things you think are impossible to do and then imagine how you could make each one possible.
  17. For every bad thing that happens to you, remember at least 3 good things that happened.
  18. Read something you have not read yet.
  19. Make friends with people on the other side of the world.
  20. When you have an idea, make a note of it, and later check to see if it happened.
  21. Connect a sport with your work.
  22. Try food you never tried before.
  23. Talk to grandparents and relatives and listen to their stories.
  24. Give an incorrect answer to a question.
  25. Find links between people, things, ideas, or facts.
  26. Ask children how to do something and observe their creativity.

Start practicing the steps mentioned above to inculcate a creative mindset and apply it in your day-to-day work. Companies like GE Healthcare, Procter & Gamble, and Uber have practiced design thinking and applied it in their new product launches and in solving complex problems within their organizations. Be natural to be more creative! When you are more creative, you can apply design thinking to find a solution to any complex problem in your work.

This is the first article in the ‘Design-led Organization’ series at GAVS. Keep watching this space for more articles on design, and keep exploring the world of design thinking!

About the Author –

Gogul is a UX designer with 8+ years of experience in designing experiences for digital channels like enterprise apps, B2C and B2B apps, mobile apps, kiosks, point of sale, endless aisle, and telecom products. He is passionate about transforming complex problems into actionable solutions using design.