Tuning Agile Delivery for Customer and Employee Success

Ashish Joseph

What is Agile?

Agile has become very popular in the software development industry for making delivery more efficient and effective. A common misconception is that Agile is a framework or a process that follows a methodology for software development. In reality, Agile is a set of values and principles, a collection of beliefs that teams can use for decision making and for optimizing project deliveries. It is customer-centric and flexible, helping teams adapt accordingly. It doesn’t make decisions for the team; instead, it gives teams a foundation for making decisions that can result in a stellar execution of the project.

According to the Agile Manifesto, teams can deliver better by prioritizing the first element of each pair below over the second.

  • Individuals and Interactions over Processes and Tools
  • Working Software over Comprehensive Documentation
  • Customer Collaboration over Contract Negotiation
  • Responding to Change over Following a Plan

With respect to software development, Agile is an iterative approach to project management that helps teams deliver results with measurable customer value. The approach is designed to be faster and ensures quality of delivery, aided by periodic customer feedback. Agile aims to break down the requirement into smaller portions, the results of which can be continuously evaluated, with a natural mechanism to respond to changes quickly.


Why Agile?

The world is changing, and businesses must be ready to adapt as market demands change over time. Of the Fortune 500 companies from 1955, 88% no longer exist. Nearly half of the S&P 500 companies are forecast to be replaced every ten years. The only way for organizations to survive is to innovate continuously and understand the pulse of the market every step of the way. An innovative mindset helps organizations react to changes and discover new opportunities the market can offer them from time to time.

Agile helps organizations execute projects in an everchanging environment. The approach helps break down modules for continuous customer evaluation and implement changes swiftly.

The traditional approach to software project management uses the waterfall model: Plan, Build, Test, Review, and Deploy. This approach forces iterations back into the plan phase whenever the requirements deviate from the market. When teams choose Agile, they can respond to changes in the marketplace and implement customer feedback without going off plan. Agile plans are designed to accommodate continuous feedback and the corresponding changes. Organizations should cultivate the ability to adapt and respond quickly to new and changing market demands. This foundation is imperative for modern software development and delivery.

Is Agile the Right Fit for my Customer?

People who advocate Agile development claim that Agile projects succeed more often than waterfall delivery models, but this claim has not been validated by statistics. A paper titled “How Agile your Project should be?” by Dr. Kevin Thompson of Kevin Thompson Consulting provides a mathematical perspective on both Agile and waterfall project management. In it, both approaches were applied to the same requirements and subjected to the same unanticipated variables. The paper focused on the statistical evidence supporting the validity of both options to evaluate the fit.

While assessing the right approach, the following questions need to be asked:

  • Are the customer requirements for the project complete, clear and stable?
  • Can the project effort estimation be easily predicted?
  • Has a project with similar requirements been executed before?

If the answer to all the above questions is Yes, then Agile is not the approach to follow.

The Agile approach provides a better return on investment and risk reduction when there is high uncertainty of different variables in the project. When the uncertainty is low, waterfall projects tend to be more cost effective than agile projects.

Optimizing Agile Customer Centricity

Customer centricity should be the foundation of all project deliveries. It helps businesses align themselves with the customer’s mission and vision for the project at hand. When taking an Agile approach to a project in a dynamic, changing environment, the following principles can help organizations align better with their customers’ goals.

  • Prioritizing Customer Satisfaction through timely and continuous delivery of requirements.
  • Openness to changing requirements, regardless of the development phase, to enable customers to harness the change for their competitive advantage in the market.
  • Frequent delivery of modules with a preference towards shorter timelines.
  • Continuous collaboration between management and developers to understand the functional and non-functional requirements better.
  • Measuring progress through the number of working modules delivered.
  • Improving velocity and agility in delivery by concentrating on technical excellence and good design.
  • Periodic retrospection at the end of each sprint to improve delivery effectiveness and efficiency.
  • Trusting and supporting motivated individuals to lead projects on their own and allowing them to experiment.

Since Agile is a collection of principles and values, its real utility lies in giving teams a common foundation to make good decisions with actionable intelligence to deliver measurable value to their customers.

Agile Empowered Employee Success

A truly Agile team makes their decisions based on Agile values and principles. The values and principles have enough flexibility to allow teams to develop software in the ways that work best for their market situation while providing enough direction to help them to continually move towards their full potential. The team and employee empowerment through these values and principles aid in the overall performance.

Agile improves not only the team but also the environment in which it operates, by helping employees stay compliant with audit and governance requirements. It reduces the overall project cost for dynamic requirements and focuses on technical excellence along with an optimized delivery process. The 14th Annual State of Agile Report 2020, published by StateofAgile.com, surveyed 40,000 Agile executives to get insights into the application of Agile across different areas of enterprises. The report examined the Agile techniques that contributed most to employee success. The following are some of the most preferred Agile techniques that helped enhance employee and team performance.


All the above Agile techniques help teams and individuals introspect on their actions and understand areas of improvement in real time, with periodic qualitative and quantitative feedback. Each deliverable from multiple cross-functional teams can be monitored, tracked, and assessed under a single roof. Collectively, these techniques enable an enhanced form of delivery and empower each team to realize its full potential.

Above all, Agile techniques help teams feel the pulse of the customer every step of the way. The openness to change, regardless of the phase, helps them map all the requirements, leading to overall customer satisfaction coupled with employee success.

Top 5 Agile Approaches


A Truly Agile Organization

The majority of Agile adoption has been concentrated in development, IT, and operations. However, organizations should strive for effective alignment and coordination across all departments. Organizations today are aiming to expand agility into areas beyond building, deploying, and maintaining software. At the end of the day, Agile is not about the framework; it is about the Agile values and principles an organization believes in for achieving its mission and vision in the long run.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management. He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Why is AIOps an Industrial Benchmark for Organizations to Scale in this Economy?

Ashish Joseph

Business Environment Overview

In this pandemic economy, the topmost priorities for most companies are to make sure the operations costs and business processes are optimized and streamlined. Organizations must be more proactive than ever and identify gaps that need to be acted upon at the earliest.

The industry has been striving for efficiency and effectiveness in its operations day in and day out. As a reliability check to ensure operational standards, many organizations consider the following levers:

  1. High Application Availability & Reliability
  2. Optimized Performance Tuning & Monitoring
  3. Operational gains & Cost Optimization
  4. Generation of Actionable Insights for Efficiency
  5. Workforce Productivity Improvement

Organizations that have prioritized the above levers in their daily operations require dedicated teams to analyze different silos and implement solutions that deliver results. Running projects of this complexity affects the scalability and monitoring of these systems. This is where AIOps platforms come in, providing customized solutions for the growing needs of organizations of all sizes.

Deep Dive into AIOps

Artificial Intelligence for IT Operations (AIOps) is a platform that provides multiple layers of functionality leveraging machine learning and analytics. Gartner defines AIOps as a combination of big data and machine learning functionalities that empower IT functions, enabling scalability and robustness across the entire ecosystem.

These systems transform the existing landscape to analyze and correlate historical and real-time data to provide actionable intelligence in an automated fashion.


AIOps platforms are designed to handle large volumes of data. The tools offer various data collection methods, integrate multiple data sources, and generate visual analytical intelligence. They provide centralized, flexible data insights across directly and indirectly coupled IT operations.

The platform aims to bring an organization’s infrastructure monitoring, application performance monitoring, and IT systems management process under a single roof to enable big data analytics that give correlation and causality insights across all domains. These functionalities open different avenues for system engineers to proactively determine how to optimize application performance, quickly find the potential root causes, and design preventive steps to avoid issues from ever happening.

AIOps has transformed the culture of IT war rooms from reactive to proactive firefighting.

Industrial Inclination to Transformation

The pandemic economy has challenged the traditional way companies choose their transformation strategies. Machine learning-powered automation for creating an autonomous IT environment is no longer a luxury. The use of mathematical and logical algorithms to derive solutions and forecasts for issues correlates directly with the overall customer experience. In this pandemic economy, customer attrition has a serious impact on annual recurring revenue. Hence, organizations must reposition their strategies to be more customer-centric in everything they do. Providing customers with best-in-class service, coupled with continuous availability and enhanced reliability, has become an industry standard.

As reliability and scalability are crucial factors for any company’s growth, cloud technologies have seen a growing demand. This shift of demand for cloud premises for core businesses has made AIOps platforms more accessible and easier to integrate. With the handshake between analytics and automation, AIOps has become a transformative technology investment that any organization can make.

As organizations scale in size, so do the workforce and the complexity of their processes. The increase in size often burdens organizations with time-pressed teams, high delivery pressure, and reactive housekeeping strategies. An organization must be ready to meet present and future demands with systems and processes that scale seamlessly. This is why AIOps platforms serve as a multilayered functional solution that integrates with existing systems to manage and automate tasks efficiently and effectively. When scaling results in process complexity, AIOps platforms convert that complexity into effort savings and productivity enhancements.

Across the industry, many organizations have implemented AIOps platforms as transformative solutions to help them meet present and future demand. Various research groups have conducted studies quantifying the resulting effort savings and productivity improvements.

The AIOps Organizational Vision

As the digital transformation race went into full throttle during the pandemic, AIOps platforms also evolved. The industry had earlier ventured into traditional event correlation and operations analytics tools that helped organizations reduce incidents and the overall MTTR. AIOps is relatively new to the market, as Gartner coined the term only in 2016. Today, AIOps has attracted attention from multiple industries analyzing the feasibility of implementation and the return on investment from the overall transformation. Google Trends shows a significant increase in user searches for AIOps over the last couple of years.


While making a well-informed decision to include AIOps in the organization’s vision for growth, we must analyze the following:

  1. Understanding the feasibility and concerns for its future adoption
  2. Classification of business processes and use cases for AIOps intervention
  3. Quantification of operational gains from incident management using the functional AIOps tools

AIOps truly envisions providing tools that transform system engineers into reliability engineers, building systems that trend toward zero incidents.

Because above all, Zero is the New Normal.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management. He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Patient Segmentation Using Data Mining Techniques

Srinivasan Sundararajan


Patient Segmentation & Quality Patient Care

As the need for quality and cost-effective patient care increases, healthcare providers are increasingly focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. Simply put, data-driven healthcare is augmenting the human intelligence based on experience and knowledge.

Segmentation is a standard technique used in retail, banking, manufacturing, and other industries that need to understand their customers in order to provide better customer service. Customer segmentation defines behavioral and descriptive profiles of customers, which are then used to design personalized marketing programs and strategies for each group.

In a way, patients are like customers to healthcare providers. Though quality of care takes precedence over profit-making, a similar segmentation of patients will immensely benefit healthcare providers, mainly for the following reasons:

  • Customizing the patient care based on their behavior profiles
  • Enabling a stronger patient engagement
  • Providing the backbone for data-driven decisions on patient profile
  • Performing advanced medical research like launching a new vaccine or trial

The benefits are obvious, and individual hospitals may add more points to the above list. The rest of this article covers how to perform patient segmentation using data mining techniques.

Data Mining for Patient Segmentation

In data mining, a segmentation or clustering algorithm iterates over the cases in a dataset to group them into clusters with similar characteristics. These groupings are useful for exploring data, identifying anomalies, and creating predictions. Clustering is an unsupervised data mining (machine learning) technique for grouping data elements without advance knowledge of the group definitions.

K-means clustering is a well-known method of assigning cluster membership by minimizing the differences among items in a cluster while maximizing the distance between clusters. The algorithm first identifies relationships in a dataset and generates a series of clusters based on those relationships. A scatter plot is a useful way to visually represent how the algorithm groups data: the scatter plot represents all the cases in the dataset, with each case as a point on the graph, and the cluster points illustrate the relationships that the algorithm identifies.
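The membership step at the heart of K-means, assigning each case to its nearest centroid, can be sketched in plain Python; the points and centroids below are hypothetical:

```python
import math

def assign_to_clusters(points, centroids):
    """Assign each 2-D point to the nearest centroid by Euclidean distance."""
    labels = []
    for x, y in points:
        distances = [math.hypot(x - cx, y - cy) for cx, cy in centroids]
        labels.append(distances.index(min(distances)))
    return labels

# Hypothetical data: two visually separable groups of points.
points = [(1.0, 1.2), (0.8, 1.0), (5.0, 5.1), (5.2, 4.9)]
centroids = [(1.0, 1.0), (5.0, 5.0)]
print(assign_to_clusters(points, centroids))  # → [0, 0, 1, 1]
```

Each label indexes the centroid a case falls closest to; on a scatter plot, these labels would color the cluster groupings described above.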


One of the important parameters of the K-means algorithm is the number of clusters, or cluster count. We need to set this to a value that is meaningful for the business problem being solved. However, there are well-established techniques for finding the optimal number of clusters for a given dataset, as explained next.

To determine the number of clusters for the algorithm to use, we can plot the within-cluster sum of squares (WCSS) against the number of clusters extracted. The appropriate number of clusters to use is at the bend, or ‘elbow’, of the plot. This Elbow Method is one of the most popular ways to determine the optimal value of k, i.e., the number of clusters. The following code creates such a curve.

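A minimal, self-contained sketch of such an elbow computation in pure Python (illustrative data, no plotting library; a real implementation would typically use scikit-learn and matplotlib):

```python
def kmeans(points, k, iters=20):
    """Minimal Lloyd's algorithm; initial centroids are the first k points."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if a cluster ends up empty
                centroids[i] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids

def wcss(points, centroids):
    """Within-cluster sum of squares for the fitted centroids."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
               for p in points)

# Hypothetical 2-D data with four visible groups, one in each corner.
data = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8),
        (1, 8), (2, 9), (1, 9), (8, 1), (9, 2), (8, 2)]
curve = [wcss(data, kmeans(data, k)) for k in range(1, 7)]
for k, v in zip(range(1, 7), curve):
    print(k, round(v, 1))
# The 'elbow' is where the curve flattens; for this data it is around k = 4.
```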

In this example, based on the graph, it looks like k = 4 would be a good value to try.

Reference Patient Segmentation Using K-Means Algorithm in GAVS Rhodium Platform

The GAVS Rhodium Platform, which helps healthcare providers with patient data management and patient data sharing, contains a reference implementation of patient segmentation using the K-means algorithm. The following attributes are used, based on publicly available patient admission data (no personal information is used in this dataset). Note that the reference implementation uses sample attributes; in a real scenario, consulting healthcare practitioners will help identify the correct attributes to use for clustering.

To prepare the data for clustering patients, patients must be separated along the following dimensions:

  • HbA1c: Measuring the glycated form of hemoglobin to obtain the three-month average of blood sugar.
  • Triglycerides: Triglycerides are the main constituents of natural fats and oils. This test indicates the amount of fat or lipid found in the blood.
  • FBG: Fasting Plasma Glucose test measures the amount of glucose levels present in the blood.
  • Systolic: Blood pressure is the pressure of circulating blood against the walls of blood vessels. This test relates to the phase of the heartbeat when the heart muscle contracts and pumps blood from the chambers into the arteries.
  • Diastolic: The diastolic reading is the pressure in the arteries when the heart rests between beats.
  • Insulin: Insulin is a hormone that helps move blood sugar, known as glucose, from your bloodstream into your cells. This test measures the amount of insulin in your blood.
  • HDL-C: Cholesterol is a fat-like substance that the body uses as a building block to produce hormones. HDL-C or good cholesterol consists primarily of protein with a small amount of cholesterol. It is considered to be beneficial because it removes excess cholesterol from tissues and carries it to the liver for disposal. The test for HDL cholesterol measures the amount of HDL-C in blood.
  • LDL-C: LDL-C or bad cholesterol present in the blood as low-density lipoprotein, a relatively high proportion of which is associated with a higher risk of coronary heart disease. This test measures the LDL-C present in the blood.
  • Weight: This test indicates the heaviness of the patient.

The above tests are taken for the patients during the admission process.

The following is the code snippet behind the scenes that creates the patient clustering.

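A minimal sketch of such a clustering step in pure Python, using hypothetical values for two of the attributes listed above (this is illustrative only, not the Rhodium implementation, and the patient values are not clinical data):

```python
# Hypothetical patient records: (HbA1c %, fasting blood glucose mg/dL).
patients = {
    "P1": (5.2, 90), "P2": (5.4, 95), "P3": (8.9, 180), "P4": (9.1, 175),
}

def standardize(values):
    """z-score scaling so attributes on different scales weigh equally."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0
    return [(v - mean) / sd for v in values]

# Scale each attribute column, then rebuild the per-patient rows.
cols = list(zip(*patients.values()))
scaled = list(zip(*[standardize(c) for c in cols]))

def nearest(p, centroids):
    """Index of the centroid closest to p (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))

# Two illustrative centroids: a 'normal-range' and an 'elevated' profile.
centroids = [scaled[0], scaled[2]]
labels = {pid: nearest(row, centroids) for pid, row in zip(patients, scaled)}
print(labels)  # → {'P1': 0, 'P2': 0, 'P3': 1, 'P4': 1}
```

Standardizing each attribute first matters because otherwise the glucose column, with its much larger numeric range, would dominate the distance calculation.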

The algorithm outputs the clusters formed from the input data.

Just from this sample, healthcare providers can infer patient behavior and patterns based on their creatinine and glucose levels; in real-life situations, other attributes can be used.

AI will play a major role in future healthcare data management and decision making, and data mining algorithms like K-means provide an option to segment patients based on their attributes, which will improve the quality of patient care.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic healthcare era, using a combination of multi-modal databases, blockchain, and data mining. The solutions aim at patient data sharing within and across hospitals (healthcare interoperability), while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero-knowledge proofs.

Getting The Best From Healthcare AI

Tim Perry

Co-founder & CIO, Healthcare Too

Advisor to the CIO of AgFirst

Is Healthcare Artificial Intelligence The Answer?

To help explain the future of healthcare Artificial Intelligence (AI) let’s borrow a few lines from Lewis Carroll’s classic Alice in Wonderland:

Alice: Would you tell me, please, which way I ought to go from here?

The Cheshire Cat: That depends a good deal on where you want to get to.

So it is with healthcare AI. It really just depends on where we want to go with healthcare in the US (and globally for that matter). Much of the current conversation seems to be on using AI to improve medical care. Hospitals want to use data from retail clinics, homes, government agencies, and more to predict individual medical needs. Big Tech companies try to apply AI to diagnose diseases better than physicians. Insurers collect massive amounts of data to manage better their risk pool through AI.

AI in Healthcare

A common theme for so many of these healthcare AI scenarios is that AI improves the efficiency of the current system. That improvement is supposedly good for everyone: patients, providers, insurers. And that is also where we get it terribly wrong. If we really want to make the most of healthcare AI investments and promote wellbeing there are two things we must remember:

  1. No one wants to be a patient, but everyone wants to be healthy.
  2. AI offers only point solutions, not a universal truth.

Everyone Wants To Be Healthy

No one wants to be a patient, not even doctors and nurses. The patient experience is painful, frightening, and terribly expensive (in the US anyway). Everyone would much prefer to remain healthy and never see the inside of a hospital. In the US sick care system, however, there is a financial incentive only when there is a diagnosis and treatment. Healthcare AI solutions that do not produce more diagnoses and treatments are not viable in our current sick care system. Like Alice, we must know which way we want to go: more sick care or a new system for health and wellbeing?

AI Offers Only Point Solutions

Artificial Intelligence comes in two basic flavors: 1) General and 2) Narrow. Again, we must plan and invest knowingly to get to where we want to go. These investments over the next 5-10 years will largely determine the direction of Healthcare for decades.

General AI

This is the sexy AI, the stuff we see in science fiction. Computers are so smart that they can address any type of problem decisively and with lightning speed. We use words like “reasoning” or “thinking” when we imagine the power of General AI. As far as our investments and resources go for healthcare AI the General AI option is many years away. We cannot afford to invest in fiction.

Narrow AI

That leaves us to consider narrow AI. These are solutions that are focused on a specific task like search, image analysis, or driving a car. Each is a significant undertaking and requires advanced capabilities. These point solutions in healthcare AI are already underway. Unfortunately, many of the solutions are those that focus on more diagnoses and treatments in the current sick care model. This is not where we want to go.

Healthcare AI For Health


Focused on Narrow AI, we can envision healthcare where AI promotes health as a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity (as the World Health Organization defines health). There are near countless examples of improving health with AI when we think holistically about real healthcare requirements:

  • Instead of more diagnoses and treatments, what about healthcare AI that weans patients off medications with improvements in nutrition and other social determinants of health?
  • Maybe AI that offers an appropriate personalized spiritual thought based on facial expression, voice tone, or body posture?
  • What about AI for positive online social interactions that help filter negative experiences and protect privacy instead of tracking every movement/action to provide more ads?
  • If we allow AI-driven cars on our roads why not self-driving food trucks with fresh produce and prepared foods for areas we currently call “food deserts”?
  • And just imagine, if you will, an AI that evaluated a person’s current health not only against mountains of conventional medical data from the last hundred years but millennia of data from traditional medical systems like Ayurveda and Traditional Chinese Medicine?

There are countless applications for real healthcare AI. We only need to decide where we are going. Be Well!

About the Author –

Tim Perry, MPA, MS, CPHIMS, CISSP is the Co-Founder & Chief Information Officer of Consumer Health platform HealthCare Too. At present, Tim is an advisor to the CIO of AgFirst and plays a key role in Strategy and Planning of the organization. Over the past 3 decades, Tim has worked in Fortune 50 executive leadership roles as well as startups and has developed a deep passion for transforming healthcare. He is blessed with a wonderful wife and two inspiring children. Tim has practiced Tai Chi (Taiji Chuan) for 20 years and enjoys cooking wholesome (and easy) meals.

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. But new advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is and then the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard, where they are used to isolate different cargos transported via ships. Software technology uses a similar containerization approach.

Containers are different from virtual machines (VMs): a VM needs a guest operating system running on a host operating system (OS), whereas containers use OS-level virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate guest operating system.

In containers, software and its dependencies are packaged together so that they can run anywhere, whether on an on-premises desktop or in the cloud.


Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Containers Image Vulnerabilities

When a container image is created, it may be patched against all known vulnerabilities at that time. But a vulnerability might be discovered later, when the container image is no longer being patched. Traditional systems can be patched in place when a fix for a vulnerability becomes available, but for containers, updates must be made upstream in the images, which must then be redeployed. So containers end up with vulnerabilities when an older image version is still deployed.

Also, if the container image is misconfigured or unwanted services are running, it will lead to vulnerabilities.

Countermeasures

Using traditional vulnerability assessment tools to assess containers will lead to false positives. You need a tool designed to assess containers, so that you get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together, so there is a chance of malicious files being added, intentionally or unintentionally. Such malicious software has the same effect as it would on traditional systems.

If secrets are embedded in clear text, they pose a security risk should anyone unauthorized gain access.

Countermeasures

Continuous monitoring of all images for embedded malware, using signature and behavioral detection, can mitigate embedded malware risks.

Secrets should never be stored inside container images; when required, they should be provided dynamically at runtime.
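A common way to follow this advice is to inject secrets at runtime, for example through environment variables set by the orchestrator or a secrets manager. A minimal Python sketch (the DB_PASSWORD name and value are illustrative):

```python
import os

def get_secret(name):
    """Read a secret injected at runtime (e.g. by the orchestrator or a
    secrets manager) instead of baking it into the image in clear text."""
    value = os.environ.get(name)
    if value is None:
        # Fail fast: a missing secret should stop startup rather than
        # silently fall back to a hard-coded default inside the image.
        raise RuntimeError(f"secret {name!r} was not provided at runtime")
    return value

os.environ["DB_PASSWORD"] = "example-only"  # simulating runtime injection
print(get_secret("DB_PASSWORD"))  # → example-only
```

Failing fast on a missing secret keeps the image free of fallback credentials that would otherwise defeat the purpose of runtime injection.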

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This capability may lead teams to run container images from third parties without validating them, which can introduce data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images, to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

Registry is nothing but a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, man-in-the-middle attacks could intercept network traffic, steal programmer or admin credentials, or serve outdated or fraudulent images.

To overcome insecure connection issues, configure development tools and running containers to connect to registries only over encrypted channels.

  2. Insufficient authentication and authorization restrictions

As we have seen, registries store container images containing sensitive information. Insufficient authentication and authorization can result in exposure of an app’s technical details and loss of intellectual property. It can also lead to compromise of the containers themselves.

Access to registries should be authenticated, and only trusted entities should be able to add images. All write access should be periodically audited, and read access should be logged. Proper authorization controls should be enabled to avoid authentication- and authorization-related risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed with the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating all users as administrators puts every container the orchestrator manages at risk.

Orchestrators should grant only the required access, enforced through proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In container environments, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters see only the encrypted packets traveling between hosts, leading to security blindness and ineffective traffic monitoring.

To overcome this risk, orchestrators should be configured to separate network traffic into distinct virtual networks according to sensitivity level.

  3. Orchestrator node trust

Special attention is needed to maintain trust between hosts, especially the orchestrator node. A weak orchestrator configuration increases risk; for example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, orchestration should be configured securely for nodes and apps, and if any node is compromised, it should be possible to isolate and remove it without disturbing the other nodes.

Container Risks

  1. App vulnerabilities

Defense in depth is always good. Even after following the recommendations above, containers may still be compromised if the apps they run are vulnerable.

As noted earlier, traditional security tools may not be effective when applied to containers. You need a container-aware tool that detects behavioral anomalies in the app at runtime, so that issues can be found and mitigated.

  2. Rogue containers

It is possible to have rogue containers. Developers may launch them to test their code and then leave them behind. These can lead to exploits, as such containers might not have been thoroughly checked for security loopholes.

You can overcome this with separate environments for development, test, and production, together with role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has an attack surface, and the larger the attack surface, the easier it is for an attacker to find and exploit a vulnerability, compromising the host operating system and the containers that run on it.

If you cannot use a container-specific operating system, you can follow the NIST SP 800-123 guide to server security to minimize the attack surface.

  2. Shared kernel

If you run only containers on a host OS, you will have a smaller attack surface than on a general-purpose host machine, which needs the libraries and packages required to run a web server, a database, and other software.

You should not mix container and non-container workloads on the same host machine.

If you wish to further explore this topic, I suggest you read NIST.SP.800-190.



About the Author –

Anandharaj is a lead DevSecOps engineer at GAVS and has over 13 years of experience in cybersecurity across different verticals, which include network security, application security, computer forensics, and cloud security.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, especially in the mainstream media and also among our clients, is that to conduct a successful attack against an IT system it is necessary to ‘investigate’ and find a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risks is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST found that correcting security bugs early on is orders of magnitude cheaper than doing so when the development has been completed.

When composing a text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, there are security tools known as ASTs (Application Security Testing) that find programming errors which introduce security weaknesses. ASTs report the file and line where the vulnerability is located, in the same way that a text editor reports the page and the line that contain a typo.

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just as it is almost impossible to catch all errors in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools, also known as ASTs or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

The SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming that could lead to a vulnerability.

The DAST dynamic analysis replicates the view of an attacker. The tool executes hundreds or thousands of queries against the application, designed to replicate the activity of an attacker, to find security vulnerabilities. This is a black-box analysis because the point of view is purely external, with no knowledge of the application’s internal architecture.

The level of detail provided by the two types of tools is different. SAST tools provide file and line where the vulnerability is located, but no URL, while DAST tools provide the external URL, but no details on the location of the problem within the code base of the application. Some teams use both tools to improve visibility, but this requires long and complex triaging to manage the vulnerabilities.

The Interactive AST Approach

The Interactive Application Security Testing (IAST) tools combine the static approach and the dynamic approach. They have access to the internal structure of the application, and to the way it behaves with actual traffic. This privileged point of view is ideal to conduct security analysis.

From an architecture point of view, the IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms such as Application Performance Monitoring tools (APMs) share this proven approach.

Once the agent has been installed, it incorporates automatic security sensors in the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage is much better than the visibility that SAST and DAST rely on.
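To make the sensor idea concrete, here is a toy sketch, in no way a real IAST agent, of a dataflow sensor at a database-access execution point: values arriving from requests are marked as untrusted, and the sensor flags them if they reach the SQL sink without sanitization. All names here are illustrative:

```python
class Tainted(str):
    """Marks a value as untrusted user input (e.g. from an HTTP request)."""

def sanitize(value):
    # Hypothetical sanitizer: escaping quotes yields a plain, untainted str
    return value.replace("'", "''")

def db_query_sensor(query_fragment):
    """Sensor placed at the database-access execution point."""
    if isinstance(query_fragment, Tainted):
        return "VULNERABILITY: tainted data reached SQL sink"
    return "ok"

user_input = Tainted("' OR 1=1 --")           # data flowing in from a request
print(db_query_sensor(user_input))            # flagged by the sensor
print(db_query_sensor(sanitize(user_input)))  # passes after sanitization
```

A real agent instruments these execution points automatically at class-loading time rather than requiring explicit calls, but the underlying check, following untrusted data from request to sink, is the same.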

In terms of specific results, we can look at two important metrics: how many types of vulnerabilities the tool finds, and how many of the identified vulnerabilities are false positives. The best DAST tools are able to find only 18% of the existing vulnerabilities on a test application. Even worse, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not true problems!


Source: Hdiv Security via OWASP Benchmark public result data

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments: development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic points of view allows us to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces manual verification activities.

An IAST tool can add a great deal of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.

Post-Pandemic Recruiting Practices

Prabhakar Kumar Mandal

The COVID pandemic has transformed business as we know it. This includes recruitment. From pre-hire activities to post-hire ones, no hiring practice will be exempt from the change we’re witnessing. To maintain a feasible talent acquisition program now and in the coming years, organizations face a persistent need to reimagine the way they do things at every step of the hiring funnel.


In my view, the following are the key aspects to look at:

1. Transforming Physical Workspaces

Having employees physically present at the workplace is fraught with challenges now. We envision many companies transitioning to a fully or partially remote workforce to save on costs and give employees more flexibility.

This means companies that maintain physical headquarters will be paying much closer attention to the purpose those spaces really serve, and so will the candidates. The emphasis now will be on spaces of necessity: meeting areas, spaces for collaborative work, and comfortable, individual spaces for essential workers who need to be onsite.

2. Traveling for interviews will become obsolete

It’s going to be a while before non-essential travel assumes its pre-corona importance. In a study of traveler attitudes spanning the U.S., Canada, the U.K., and Australia, the portion of people who said they intended to restrict their travel over the next year increased from 24% in the first half of March to 40% in the second half of March.

Candidates will be less willing than they once were to jump on a plane for an in-person interview when a video conference is a viable alternative. 

3. Demand for workers with cross-trained skills will increase

Skills-based hiring has been on the rise and will keep increasing as businesses strive to do more with a smaller headcount. We anticipate organizations will increasingly seek out candidates who can wear multiple hats.

Additionally, as machines take on more jobs that were once reserved for people, we will see even greater demand for uniquely human skills like problem solving and creative thinking. Ravi Kumar, president of Infosys Ltd., summed it up perfectly in an interview with Forbes: “machines will handle problem-solving and humans will focus on problem finding.” 

4. Recruiting events will look a lot different 

It’s unclear when large-scale, in-person gatherings like job fairs will be able to resume, but it will likely be a while. We will likely see most events move to a virtual model, which will not only reduce risk but significantly cut costs for those involved. This may open new opportunities to allocate that budget to improve some of the other pertinent recruiting practices on this list. 


5. Time to hire may change dramatically

The current approach is likely to change. Consider that most people who took a new job last year were not searching for one: somebody came and got them. Businesses seek to fill their recruiting funnels with as many candidates as possible, especially ‘passive candidates’ who are not looking to move. Frequently, employers advertise jobs that do not exist, hoping to find people who might be useful later or in a different framework. We are always campaigning for the importance of minding our recruiting metrics, which can help us not only hire more competently but also identify interruptions in our recruiting process.

Are there steps in the hiring process, like screening or onboarding, that can be accelerated to balance things out? Are there certain recruitment channels that typically yield faster hires than others that can be prioritized? These are important questions to ask as you analyze the pandemic’s impacts to your hiring funnel. 

6. How AI can be leveraged to screen candidates

AI is helping candidates get matched with the right companies, with over 100 parameters used to assess them. This reduces wasted time, money, and resources. Candidates are scored on their core strengths, which helps the recruitment manager place them in the apt role.

The current situation presents the perfect opportunity for companies to adopt new tools. Organizations can reassess their recruitment processes and strategies through HR-aligned technology.

Post-pandemic hiring strategy

This pertains more to the industries most impacted by the pandemic, like businesses in the hospitality sector, outdoor dining, and travel to name a few. Many of the applicants in this domain have chosen to make the shift towards more promising or booming businesses.

However, once the pandemic blows over and restrictions are lifted, you can expect suffering sectors to come back with major recruitment changes and fierce competition over top talent.

Companies that take this time to act, by cultivating relationships and connections with promising talent in their sphere, will have the advantage of gathering valuable data on probable candidates.

About the Author –

Prabhakar is a recruiter by profession and a cricketer by passion. His focus is on hiring for the infra vertical. He hails from a small town in Bihar and was brought up in Pondicherry. Prabhakar has represented Pondicherry in U-19 cricket (National School Games). In his free time, he enjoys reading, working on his health and fitness, and spending time with his family and friends.

Reduce Test Times and Increase Coverage with AI & ML

Kevin Surace

Chairman & CTO, Appvance.ai

With the need for frequent builds—often many times in a day—QEs can only keep pace through AI-led testing. It is the modern approach that allows quality engineers to create scripts and run tests autonomously to find bugs and provide diagnostic data to get to the root cause.

AI-driven testing means different things to different QA engineers. Some see it as using AI for identifying objects or helping create script-less testing; some consider it the autonomous generation of scripts, while others think in terms of leveraging system data to create scripts that mimic real user activity.

Our research shows that teams who are able to implement what they can in scripts and manual testing have, on average, less than 15% code, page, action, and likely user flow coverage. In essence, even if you have 100% code coverage, you are likely testing less than 15% of what users will do. That in itself is a serious issue.

Starting in 2012, Appvance set out to rethink the concept of QA automation. Today our AIQ Technology combines tens of thousands of hours of test automation machine learning with the deep domain knowledge – the essential business rules – that each QE specialist knows about their application. We create an autonomous expert system that spawns multiple instances of itself that swarm over the application, testing at the UX and API levels. Along the way, these intelligences write the scripts – hundreds and thousands of them – that describe their individual journeys through the application.

And why would we need to generate so many tests fully autonomously? Because applications today are 10X the size they were just ten years ago, but your QE team doesn’t have 10X the number of test automation engineers, and you have 10X less time to do the work than 10 years ago. Just to keep pace with the dev team, each quality engineer must be 100X more productive than they were 10 years ago.

Something had to change; that something is AI.

AI-testing in two steps

We leveraged AI and witnessed over 90% reduction in human effort to find the same bugs. So how does this work?

It’s really a two-stage process.

First, leveraging key AI capabilities in TestDesigner, Appvance’s codeless test creation system, we make it possible to write scripts faster, identify more resilient accessors, and substantially reduce maintenance of scripts.

With AI alongside you as you implement an automated test case, you get a technology that suggests the most stable accessors and constantly improves and refines them. It also creates “fallback accessors”: when a test runs and hits an accessor change, the script continues even though changes have been made to the application. And finally, the AI can self-heal scripts, updating them with new accessors without human assistance. These AI-based, built-in technologies give you the most stable scripts every time, with the most robust accessor methodologies and self-healing. Nothing else comes close.
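Conceptually (this is an illustrative sketch, not Appvance's implementation), fallback accessors and self-healing can be pictured like this: try the primary accessor, fall back to alternates when the application has changed, and promote the accessor that worked for future runs. The page and accessor names are made up:

```python
def find_element(dom, accessors):
    """dom: a dict mapping accessor -> element (a stand-in for a real page).
    accessors: ordered list of accessors, primary first, then fallbacks."""
    for i, accessor in enumerate(accessors):
        element = dom.get(accessor)
        if element is not None:
            if i > 0:
                # Self-heal: promote the accessor that worked to primary
                accessors.insert(0, accessors.pop(i))
            return element
    raise LookupError("No accessor matched; script needs human attention")

page = {"css:#login-btn": "<button>"}        # the id changed from 'submit'
accessors = ["css:#submit", "css:#login-btn"]
assert find_element(page, accessors) == "<button>"
assert accessors[0] == "css:#login-btn"      # healed for the next run
```

The test keeps running despite the UI change, and the healed accessor list is what gets persisted back into the script.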

The final two points above deal with autonomous generation of tests. To beat the crush of the queue, you need a heavy lift for finding bugs and, as we have learned, to go far beyond the use cases a business analyst listed. Job one is to find bugs and prioritize them, leveraging AI to generate tests autonomously.

Appvance’s patented AI engine has already been trained with millions of actions. You will teach it the business rules of your application (machine learning). It will then create real user flows, take every possible action, discover every page, fill out every form, get to every state, and validate the most critical outcomes just as you trained it to do. It does all this without writing or recording a single script. We call this ‘blueprinting’ an application, and we do it at every new build.

Multiple instances of the AI spin up, each selecting a unique path through the application, typically finding thousands or more flows in a matter of minutes. When complete, the AI hands you the results, including bugs, all the diagnostic data to help find the root cause, and the reusable test scripts to reproduce each bug. A further turn of the crank can refine these scripts into exact replicas of what production users are doing and apply them to the new build.

Any modern approach to continuous testing needs to leverage AI both to help QA engineers create scripts and to autonomously create tests, so that the two parts work together to find bugs and provide data to get to the root cause. That AI-driven future is available today from Appvance.
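The exploration side of blueprinting can be pictured with a toy model, not the actual engine, in which the application is a graph of pages and a breadth-first walk discovers every reachable page and records the complete flows. The page names are invented for the example:

```python
from collections import deque

def blueprint(app_graph, start):
    """Explore every reachable page/state of a (mock) application and
    record each discovered end-to-end flow."""
    flows, seen, queue = [], {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        next_pages = app_graph.get(path[-1], [])
        if not next_pages:                  # leaf page: a complete user flow
            flows.append(path)
        for nxt in next_pages:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return flows

# Hypothetical app: pages and the links/actions between them
app = {"home": ["login", "search"], "login": ["dashboard"],
       "search": ["results"], "dashboard": [], "results": []}
print(blueprint(app, "home"))
```

A production engine additionally fills forms, exercises APIs, and validates outcomes along each path, but the core idea of exhaustively mapping states and flows is the same.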

About the Author –

Kevin Surace is a highly lauded entrepreneur and innovator. He’s been awarded 93 worldwide patents, and was Inc. Magazine Entrepreneur of the Year, CNBC Innovator of the Decade, a Davos World Economic Forum Tech Pioneer, and inducted into the RIT Innovation Hall of Fame. Kevin has held leadership roles with Serious Energy, Perfect Commerce, CommerceNet and General Magic and is credited with pioneering work on AI virtual assistants, smartphones, QuietRock and the Empire State Building windows energy retrofit.

Business Intelligence Platform RESTful Web Service

Albert Alan

RESTful API

RESTful Web Services are web services based on the REST architecture. Representational State Transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web. In this architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs), typically links on the Web.


REST has some advantages over SOAP (Simple Object Access Protocol) but is similar in technology, since both are function calls via the HTTP protocol. REST is easier to call from various platforms, transfers pure human-readable data in JSON or XML, and is faster and saves resources.

The basic idea of REST is that an object is accessed directly, not through its methods. The state of the object can be changed by the REST access, the change being driven by the passed parameters. A frequent application is the connection to SAP PI via the REST interface.

When to Use REST Services

  • You want to access BI platform repository objects or perform basic scheduling.
  • You want to use a programming language that is not supported by another BI platform SDK.
  • You want to extract all the query details and the number of records per query for all reports (Webi, Crystal, etc.).
  • You want to extract the folder paths of all reports at once.

Process Flow


RESTful Web Service Requests

To make a RESTful web service request, you need the following:

  • URL – The URL that hosts the RESTful web service.
  • Method – The type of HTTP method to use for sending the request, for example GET, PUT, POST, or DELETE.
  • Request header – The attributes that describe the request.
  • Request body – Additional information that is used to process the request.
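A sketch of assembling these four parts with Python's standard library, using the BI platform's default port seen later in this article; the logon body field names are illustrative, and the request is only built here, not sent:

```python
import json
import urllib.request

# The four parts of a RESTful web service request
url = "http://localhost:6405/biprws/v1/logon/long"   # 1. URL
method = "POST"                                      # 2. Method
headers = {"Content-Type": "application/json",       # 3. Request header
           "Accept": "application/json"}
body = json.dumps({"userName": "Administrator",      # 4. Request body
                   "password": "***",
                   "auth": "secEnterprise"}).encode()

request = urllib.request.Request(url, data=body, headers=headers, method=method)
# urllib.request.urlopen(request) would send it; omitted (no live server here)
```

Tools like Postman package exactly these same four pieces behind a UI, which makes them convenient for exploring the endpoints before automating the calls.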

Common RWS Error Messages


Restful Web Service URIs Summary List

Each entry below lists the URL, the response, and comments:

  • /v1 – Service document that contains a link to the /infostore API. This is the root level of an infostore resource.
  • /v1/infostore – Feed containing all the objects in the BOE system.
  • /v1/infostore/<object_id> – Entry corresponding to the info object with the given SI_ID, e.g. /v1/infostore/99.
  • /v1/logon/long – Returns the long form for logon, which contains the user and password authentication template. Used to log on to the BI system based on the authentication method.
  • /v1/users/<user_id> – XML feed of user details in the BOE system. You can modify a user using the PUT method and delete a user using the DELETE method.
  • /v1/usergroups/<usergroup_id> – XML feed of user group details in the BOE system. Supports the GET, PUT, and DELETE methods: modify a user group using PUT and delete it using DELETE.
  • /v1/folders/<folder_id> – XML feed displaying the details of a folder; can be used to modify the details of the folder (PUT) and delete the folder (DELETE).
  • /v1/publications – XML feed of all publications created in the BOE system. This API supports the GET method only.
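A small helper can build these URIs against the base URL; the host and port are the defaults used in this article and may differ per installation:

```python
# Base URL of the BI platform RESTful web service (adjust per install)
BASE = "http://localhost:6405/biprws"

def uri(*parts):
    """Join resource parts onto the /v1 root, e.g. uri('infostore', 99)."""
    return "/".join([BASE, "v1"] + [str(p) for p in parts])

assert uri("infostore", 99) == "http://localhost:6405/biprws/v1/infostore/99"
assert uri("logon", "long") == "http://localhost:6405/biprws/v1/logon/long"
assert uri("users", 12) == "http://localhost:6405/biprws/v1/users/12"
```

Centralizing URI construction like this keeps automation scripts readable and makes a host or port change a one-line edit.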

Extended Workflow

The workflow is as follows:

  • Pass the base URL, for example:

GET http://localhost:6405/biprws/v1/users

  • Pass the headers.

  • Get the XML/JSON response.

Automation of REST Calls

The Business Intelligence platform RESTful Web Service SDK (BI REST SDK) allows you to programmatically access BI platform functionalities such as administration, security configuration and modification of the repository. In addition to the BI platform RESTful web service SDK, you can also use the SAP Crystal Reports RESTful Web Services SDK (CR REST SDK) and the SAP Web Intelligence RESTful Web Services SDK (WEBI REST SDK).

Implementation

An application has been designed and implemented in Java to automate the extraction of the SQL queries of all the Webi reports from the server at once.

Tools used:

  • Postman (Third party application)
  • Eclipse IDE

The structure of the application is as below:

The application comprises the required Java jar files, Java class files, a Java properties file, and logs. The Java class file (SqlExtract) is the source code; it is compiled and executed from the command prompt as follows:

Step 1

  • javac -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract.java

The above command compiles the Java code.

Step 2

  • java -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract

The above command runs the compiled class (note that the class name, not the .java file, is passed to java).

The Java properties file (log4j.properties) is used to set the configuration for the code’s logging; the path for the log file can also be set in the properties file.
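A minimal log4j 1.2 properties file might look like the following; the appender name, file path, and pattern are illustrative assumptions, not the application's actual settings:

```properties
# Illustrative log4j 1.2 configuration for the extraction tool
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=C:/SqlExtract/logs/SqlExtractLogger.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```

Changing the `File` property is all that is needed to redirect the extracted-query log to a different folder.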


The log file (SqlExtractLogger) contains the required output: all the extracted queries for the Webi reports, along with the data source name, type, and row count for each query, placed in the respective folder at the path set by the user in the properties file.


The application is standalone and can run on any Windows platform or server that has a Java JRE (version above 1.6 preferred) installed.

Note: All the above steps required to execute the application are consolidated in the (steps) file.

Conclusion

SAP BO provides a RESTful web service to traverse its repository, fetch structural information, and modify the metadata structure based on user requirements. Integration with programming languages like Python or Java extends its scope to a great extent, allowing the user to automate workflows and solve backtracking problems.

Handling the RESTful web service needs expertise in server administration and programming, as changes made to the metadata are irreversible.


About the Author –

Alan is an SAP Business Intelligence consultant with critical thinking and an analytical mind. He believes, ‘The more extensive a man’s knowledge of what has been done, the greater will be his power of knowing what to do’.

Mentoring – a Win-Win Situation

Rama Vani Periasamy

“If I have seen further it is by standing on the shoulders of giants.” — Isaac Newton

Did you know the English word ‘Mentor’ actually originated from the Greek epic ‘The Odyssey’?

When Odysseus had to leave his kingdom to lead his army in the Trojan war, his son Telemachus was left under the guidance of a friend ‘Mentor’. Mentor was supposed to guide and groom Telemachus during his developmental years and make him independent. The word ‘Mentor’ was thus incorporated in the English language. We use the word in the same context that existed in Greek Mythology – to guide a person, make him/her an independent thinker, and a doer.

In the age of technology, there may be tools and enormous amounts of data to get a competitive advantage, but they’re no match for a mentor. The business hall of fame is adorned with the names of people who discovered that finding a mentor made all the difference.

A lot of people have been able to achieve greater heights than they imagined because they were able to tap into their potential and that is the energy mentoring brings in.

In today’s world, many corporate offices offer mentoring programs that cut across age groups (the cross-gens), backgrounds, and experiences, benefiting everyone. But sometimes the mechanisms and expectations of a mentoring program are not clear, which makes the practice unsuccessful. Today’s young generation thinks the internet can quench their thirst for knowledge. They do not see mentors as guiding beacons to success, but merely as a way to meet their learning needs. To cite an example, mentoring is equivalent to not just teaching a man to fish, but also sharing the experiences, tricks, and tips, so that he becomes an independent fisher. More often than not, our current generation fails to understand that even geniuses like Aristotle and Bill Gates needed a mentor in their lives.

When mentoring is so powerful, why don’t we nurture the relationship? What stops us? Is time a factor? Not really. Any relationship needs some amount of time to be invested and so is the case with mentoring. Putting aside a few hours a month is an easily doable task, especially for something that is inspiring and energizing. Schedules can always be shuffled for priorities.

Now that we know that we have the time, why is it always hard to find a mentor? To begin with, how do you find a mentor? Well, it is not as difficult as we think. When you start looking for them, you will eventually find one. They are everywhere but may not necessarily be in your workplace.

We have the time, we have a mentor, so what are the guidelines in the mentoring relationship?

The guidelines can be extracted very much in the word ‘MENTOR’.

M=Mission: Any engagement works only if you have something to work on. Both the mentor and mentee must agree on the goals and share their mission statement. Creating a vision and a purpose for the mentoring relationship adds value to both sides, and this keeps you going. Articulating the mission statement would be the first activity in a mentor-mentee relationship.

E=Engage: Agree on ways to engage that work with your personalities and schedules. Set ground rules on the modes of communication. Is it going to be a periodic one-on-one conversation or remote calls? Find out the level of flexibility. Is an impromptu meeting fine? Can emails or text messages be sent? Decide on the communication medium and time.

N=Network: Expanding your network with that of your mentor or mentee and cultivating productive relationships will be the key to success. While expanding your network will be productive, remember to tread carefully: seek permission, be respectful, and ask for an introduction before you reach out to the other person’s contacts.

 T=Trust: Build and maintain trust with your mentoring partner by telling the truth, staying connected, and being dependable. And as the mentorship grows, clear communication and honesty will deepen the relationship. Building trust takes time so always keep the lines of communication open.

O=Opportunity: Create opportunities for your mentee or mentor to grow. A mentor-mentee relationship is a two-way lane, where opportunities can come from both sides that may not be open to non-mentors/mentees. Bringing in such opportunities will only help the other person achieve his/her goal, or the mission statement that was set at the beginning.

R=Review and Renew: Schedule a regular time to review progress and renew your mentoring partnership. This will help you keep your progress on track, and it will also help you set short-term goals to achieve. Reviewing also helps you retrospect on whether a different strategy should be laid out to achieve your goals.

Mentoring may sound irrelevant and unnecessary while we are surviving a pandemic and going through bouts of intense emotions. But I feel it is even more necessary during this most unusual situation we’re facing. Mentoring could be one of the ways to combat anxiety and depression caused by isolation and the inability to meet people face-to-face.

Mentoring can be done virtually through video calls, by setting up a time to track the progress of your goals and discuss challenges and accomplishments. Mentoring also proves to be the place to ask difficult questions, because it is a “no judging” relationship and an absolutely safe place to deal with work-related anxiety and fear. I still recall my early days as a campus graduate, when I was assigned a ‘buddy’, the go-to person. With them, I discussed many of my ‘what’, ‘why’ and ‘how’ questions about work and the corporate world, which I had resisted opening up about to my supervisors.

Mentoring takes time. Remember the first day you struggled to balance on your bicycle and may have fallen down, hurting your knees? But once you learned to ride, you would have loved your time on the saddle. The same applies to mentoring. Investing the time and effort in mentoring will energize you even better than a few hours of Netflix or scrolling on Instagram. Let us create a culture that shares knowledge and guides and encourages nonstop, as Socrates taught Plato, Plato taught Aristotle, and Aristotle held the beacon for many. There is an adage that goes, “when the student is ready, the teacher appears”.

“A mentor is someone who allows you to see the hope inside yourself.” — Oprah Winfrey

The article is based on the book “One Minute Mentoring” by Ken Blanchard & Claire Diaz Ortiz.

About the Author –

Rama is that everyday woman you see who juggles between family and a 9 hours work life. She loves reading history, fiction, attempting half marathons, and traveling.
To break the monotony of life and to share her interest in books & travel, she blogs and curates at www.kindleandkompass.com