Why is AIOps an Industrial Benchmark for Organizations to Scale in this Economy?

Ashish Joseph

Business Environment Overview

In this pandemic economy, the topmost priority for most companies is to ensure that operating costs and business processes are optimized and streamlined. Organizations must be more proactive than ever and identify gaps that need to be acted upon as early as possible.

The industry strives for efficiency and effectiveness in its operations day in and day out. As a reliability check on operational standards, many organizations consider the following levers:

  1. High Application Availability & Reliability
  2. Optimized Performance Tuning & Monitoring
  3. Operational gains & Cost Optimization
  4. Generation of Actionable Insights for Efficiency
  5. Workforce Productivity Improvement

Organizations that prioritize the above levers in their daily operations require dedicated teams to analyze different silos and implement solutions that deliver these results. Running projects of this complexity affects the scalability and monitoring of these systems. This is where AIOps platforms come in, providing customized solutions for the growing needs of organizations of any size.

Deep Dive into AIOps

Artificial Intelligence for IT Operations (AIOps) is a platform that provides multiple layers of functionality leveraging machine learning and analytics. Gartner defines AIOps as the combination of big data and machine learning capabilities that empower IT functions, enabling the scalability and robustness of the entire ecosystem.

These systems transform the existing landscape to analyze and correlate historical and real-time data to provide actionable intelligence in an automated fashion.


AIOps platforms are designed to handle large volumes of data. The tools offer various data collection methods, integrate multiple data sources, and generate visual analytical intelligence. They are centralized and flexible, drawing data insights from both directly and indirectly coupled IT operations.

The platform aims to bring an organization’s infrastructure monitoring, application performance monitoring, and IT systems management processes under a single roof to enable big data analytics that yield correlation and causality insights across all domains. These functionalities open different avenues for system engineers to proactively optimize application performance, quickly find potential root causes, and design preventive steps to keep issues from ever happening.
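As a purely illustrative sketch of the correlation idea (the alert records, field names, and five-minute window below are hypothetical, not taken from any specific AIOps product), grouping alerts from different monitoring silos into a candidate incident might look like this:

```python
# Illustrative only: toy time-window correlation of alerts from different
# monitoring silos. Field names and the window size are hypothetical.
from datetime import datetime, timedelta

alerts = [
    {"source": "infra", "time": datetime(2020, 9, 1, 10, 0, 5),  "msg": "CPU spike on node-7"},
    {"source": "apm",   "time": datetime(2020, 9, 1, 10, 0, 40), "msg": "Latency breach on /checkout"},
    {"source": "itsm",  "time": datetime(2020, 9, 1, 10, 1, 10), "msg": "User-reported checkout failures"},
    {"source": "infra", "time": datetime(2020, 9, 1, 14, 3, 0),  "msg": "Disk usage 85% on node-2"},
]

WINDOW = timedelta(minutes=5)

def correlate(alerts, window=WINDOW):
    """Group alerts occurring within the same time window into one candidate incident."""
    groups, current = [], []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        if current and alert["time"] - current[0]["time"] > window:
            groups.append(current)
            current = []
        current.append(alert)
    if current:
        groups.append(current)
    return groups

for i, group in enumerate(correlate(alerts), start=1):
    sources = {a["source"] for a in group}
    print(f"Candidate incident {i}: {len(group)} alerts across {sorted(sources)}")
```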

AIOps has transformed the culture of IT war rooms from reactive to proactive firefighting.

Industrial Inclination to Transformation

The pandemic economy has challenged the traditional way companies choose their transformational strategies. Machine learning-powered automation for creating an autonomous IT environment is no longer a luxury. The use of mathematical and logical algorithms to derive solutions and forecasts for issues has a direct correlation with the overall customer experience. In this pandemic economy, customer attrition has a serious impact on annual recurring revenue. Hence, organizations must reposition their strategies to be more customer-centric in everything they do. Providing customers with best-in-class service, coupled with continuous availability and enhanced reliability, has become an industry standard.

As reliability and scalability are crucial factors for any company’s growth, cloud technologies have seen growing demand. This shift of core business workloads to the cloud has made AIOps platforms more accessible and easier to integrate. With the handshake between analytics and automation, AIOps has become a transformative technology investment that any organization can make.

As organizations scale in size, so do the workforce and the complexity of their processes. The increase in size often burdens organizations with time-pressed teams under high delivery pressure and reactive housekeeping strategies. An organization must be ready to meet present and future demands with systems and processes that scale seamlessly. This is why AIOps platforms serve as a multilayered functional solution that integrates with existing systems to manage and automate tasks with efficiency and effectiveness. When scaling results in process complexity, AIOps platforms convert that complexity into effort savings and productivity enhancements.

Across the industry, many organizations have implemented AIOps platforms as transformative solutions to help them meet their present and future demands. Various research groups have conducted studies that quantify the resulting effort savings and productivity improvements.

The AIOps Organizational Vision

With the digital transformation race in full throttle during the pandemic, AIOps platforms have also evolved. The industry had earlier ventured into traditional event correlation and operations analytics tools that helped organizations reduce incidents and overall MTTR. AIOps is still relatively new in the market, as Gartner coined the term only in 2016. Today, AIOps has attracted a lot of attention from multiple industries keen to analyze its feasibility of implementation and the return on investment from the overall transformation. Google Trends shows a significant increase in searches for AIOps over the last couple of years.


When making a well-informed decision to include AIOps in the organization’s vision for growth, we must analyze the following:

  1. Understanding the feasibility and concerns for its future adoption
  2. Classification of business processes and use cases for AIOps intervention
  3. Quantification of operational gains from incident management using the functional AIOps tools

The vision of AIOps is to provide tools that transform system engineers into reliability engineers and build systems that trend towards zero incidents.

Because above all, Zero is the New Normal.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management. He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Patient Segmentation Using Data Mining Techniques

Srinivasan Sundararajan

Patient Segmentation & Quality Patient Care

As the need for quality and cost-effective patient care increases, healthcare providers are increasingly focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. Simply put, data-driven healthcare augments the human intelligence built on experience and knowledge.

Segmentation is a standard technique used in Retail, Banking, Manufacturing, and other industries that need to understand their customers to provide better customer service. Customer segmentation defines the behavioral and descriptive profiles of customers. These profiles are then used to provide personalized marketing programs and strategies for each group.

In a way, patients are like customers to healthcare providers. Though quality of care takes precedence over profit-making, a similar segmentation of patients will immensely benefit healthcare providers, mainly for the following reasons:

  • Customizing the patient care based on their behavior profiles
  • Enabling a stronger patient engagement
  • Providing the backbone for data-driven decisions on patient profile
  • Performing advanced medical research like launching a new vaccine or trial

The benefits are obvious, and individual hospitals may add more points to the above list; the rest of this article is about how to perform patient segmentation using data mining techniques.

Data Mining for Patient Segmentation

In data mining, a segmentation or clustering algorithm iterates over the cases in a dataset to group them into clusters that share similar characteristics. These groupings are useful for exploring data, identifying anomalies, and creating predictions. Clustering is an unsupervised data mining (machine learning) technique used for grouping data elements without advance knowledge of the group definitions.

K-means clustering is a well-known method of assigning cluster membership by minimizing the differences among items in a cluster while maximizing the distance between clusters. The clustering algorithm first identifies relationships in a dataset and then generates a series of clusters based on those relationships. A scatter plot is a useful way to visually represent how the algorithm groups data. The scatter plot represents all the cases in the dataset, and each case is a point on the graph. The cluster points on the graph illustrate the relationships that the algorithm identifies.


One of the important parameters for the K-means algorithm is the number of clusters, or the cluster count. We need to set this to a value that is meaningful for the business problem being solved. However, there is good support in the algorithm for finding the optimal number of clusters for a given dataset, as explained next.

To determine the number of clusters for the algorithm to use, we can plot the within-cluster sum of squares against the number of clusters extracted. The appropriate number of clusters to use is at the bend, or ‘elbow’, of the plot. The Elbow Method is one of the most popular ways to determine this optimal value of k, i.e. the number of clusters. The following code creates such a curve.
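The original snippet did not survive publication; a minimal sketch of the Elbow Method using scikit-learn and matplotlib, with a synthetic dataset standing in for the real patient data, might look like this:

```python
# Hedged sketch of the Elbow Method; the actual implementation behind the
# article is not shown here, and the data below is synthetic.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for the patient dataset used in the article.
X, _ = make_blobs(n_samples=300, centers=4, n_features=2, random_state=42)

wcss = []  # within-cluster sum of squares for each candidate k
for k in range(1, 11):
    model = KMeans(n_clusters=k, random_state=42, n_init=10)
    model.fit(X)
    wcss.append(model.inertia_)

plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("Number of clusters (k)")
plt.ylabel("Within-cluster sum of squares")
plt.title("Elbow method for choosing k")
plt.show()
```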


In this example, based on the graph, it looks like k = 4 would be a good value to try.

Reference Patient Segmentation Using K-Means Algorithm in GAVS Rhodium Platform

The GAVS Rhodium Platform, which helps healthcare providers with Patient Data Management and Patient Data Sharing, includes a reference implementation of patient segmentation using the K-means algorithm. The following attributes are used, based on publicly available patient admission data (no personal information is used in this dataset). The reference implementation uses sample attributes; in a real scenario, consulting healthcare practitioners will help identify the right attributes to use for clustering.

To prepare the data for clustering, patients must be separated along the following dimensions:

  • HbA1c: Measures the glycated form of hemoglobin to obtain the three-month average of blood sugar.
  • Triglycerides: Triglycerides are the main constituents of natural fats and oils. This test indicates the amount of fat or lipid found in the blood.
  • FBG: The Fasting Plasma Glucose test measures the level of glucose present in the blood.
  • Systolic: Blood pressure is the pressure of circulating blood against the walls of blood vessels. This test relates to the phase of the heartbeat when the heart muscle contracts and pumps blood from the chambers into the arteries.
  • Diastolic: The diastolic reading is the pressure in the arteries when the heart rests between beats.
  • Insulin: Insulin is a hormone that helps move blood sugar, known as glucose, from the bloodstream into the cells. This test measures the amount of insulin in the blood.
  • HDL-C: Cholesterol is a fat-like substance that the body uses as a building block to produce hormones. HDL-C, or good cholesterol, consists primarily of protein with a small amount of cholesterol. It is considered beneficial because it removes excess cholesterol from tissues and carries it to the liver for disposal. This test measures the amount of HDL-C in the blood.
  • LDL-C: LDL-C, or bad cholesterol, is present in the blood as low-density lipoprotein, a relatively high proportion of which is associated with a higher risk of coronary heart disease. This test measures the LDL-C present in the blood.
  • Weight: The patient’s body weight.

The above tests are taken for the patients during the admission process.

The following is the code snippet behind the scenes that creates the patient clustering.
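The original snippet is not reproduced here; a hedged sketch of the kind of clustering code described, assuming the admission test results are available in a hypothetical patients.csv with the columns listed above, might look like this:

```python
# Illustrative reconstruction only; 'patients.csv' and the exact column names
# are assumptions, not the actual Rhodium Platform implementation.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

features = ["HbA1c", "Triglycerides", "FBG", "Systolic", "Diastolic",
            "Insulin", "HDL-C", "LDL-C", "Weight"]

# De-identified admission test results, one row per patient.
patients = pd.read_csv("patients.csv")

# Standardize so attributes measured on different scales contribute equally.
scaled = StandardScaler().fit_transform(patients[features])

# k = 4 as suggested by the elbow plot above.
kmeans = KMeans(n_clusters=4, random_state=42, n_init=10)
patients["Cluster"] = kmeans.fit_predict(scaled)

# Per-cluster averages give a quick profile of each patient segment.
print(patients.groupby("Cluster")[features].mean())
```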


Below is the output clustering created by the above algorithm.

Just from this sample, healthcare providers can infer patient behavior and patterns based on their creatinine and glucose levels; in real-life situations, other attributes can be used.

AI will play a major role in future healthcare data management and decision-making, and data mining algorithms like K-means provide an option to segment patients based on their attributes, which will improve the quality of patient care.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic healthcare era, using a combination of multi-modal databases, blockchain, and data mining. The solutions aim at patient data sharing within and across hospitals (healthcare interoperability), while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero-knowledge proofs.

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. New advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is and then the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard. They are used to isolate the different cargo transported on ships. Software technology uses a similar containerization approach.

Containers are different from Virtual Machines (VMs), where VMs need a guest operating system that runs on a host operating system (OS). Containers use OS virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate operating system.

In containers, software and its dependencies are packaged together so that they can run anywhere, whether on an on-premises desktop or in the cloud.


Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Containers Image Vulnerabilities

When a container is created, the image may be fully patched with no known vulnerabilities. But a vulnerability might be discovered later, when the container image is no longer being patched. Traditional systems can be patched in place when a fix for the vulnerability is available, but for containers, updates must be applied upstream in the images, which are then redeployed. So, containers can carry vulnerabilities simply because an older image version is deployed.

Also, if the container image is misconfigured or unwanted services are running, it will lead to vulnerabilities.

Countermeasures

If you use traditional vulnerability assessment tools to assess containers, they will produce false positives. You need a tool that has been designed to assess containers so that you get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together. Hence, there are chances of malicious files getting added, unintentionally or intentionally. Such malicious software will have the same effect as it would on traditional systems.

If secrets are embedded in clear text, it may lead to security risks if someone unauthorized gets access.

Countermeasures

Continuous monitoring of all images for embedded malware with signature and behavioral detection can mitigate embedded malware risks.

Secrets should never be stored inside container images; when required, they should be provided dynamically at runtime.
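As a minimal illustration (the variable name DB_PASSWORD is hypothetical), reading a secret from the runtime environment instead of baking it into the image might look like this in Python:

```python
# Minimal sketch: obtain a secret at runtime rather than embedding it in the image.
import os

# Anti-pattern: a clear-text secret committed into the image.
# DB_PASSWORD = "s3cr3t-password"

# Preferred: injected at runtime (e.g. by the orchestrator from a secrets store),
# so the value never lives in an image layer or source repository.
db_password = os.environ.get("DB_PASSWORD")
if db_password is None:
    raise RuntimeError("DB_PASSWORD was not provided at runtime")
```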

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This capability may lead teams to run container images from third parties without validating them, which can introduce data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images, to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

A registry is nothing but a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, man-in-the-middle attacks could intercept network traffic to steal programmer or admin credentials, or to serve outdated or fraudulent images.

You should configure development tools and running containers to connect to registries only over encrypted channels to overcome insecure connection issues.

  2. Insufficient authentication and authorization restrictions

As we have seen, registries store container images with sensitive information. Insufficient authentication and authorization can result in exposure of an app’s technical details and loss of intellectual property. It can also lead to compromise of containers.

Access to registries should be authenticated, and only trusted entities should be able to add images; all write access should be periodically audited, and read access should be logged. Proper authorization controls should be enabled to avoid authentication- and authorization-related risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed with the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating all users as administrators will affect the operation of the containers managed by the orchestrator.

Orchestrator users should be given only the required access, with proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In container deployments, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters only see the encrypted packets traveling between hosts; this leads to security blindness and ineffective traffic monitoring.

To overcome this risk, orchestrators need to separate network traffic into virtual networks according to sensitivity levels.

  3. Orchestrator node trust

You need to pay special attention to maintaining trust between hosts, especially the orchestrator node. Weaknesses in orchestrator configuration increase risk. For example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, orchestration should be configured securely for nodes and apps. If any node is compromised, it should be isolated and removed without disturbing other nodes.

Container Risks

  1. App vulnerabilities

It is always good to have a defense. Even after following the recommendations we have seen above, containers may still be compromised if the apps themselves are vulnerable.

As we have seen, traditional security tools may not be effective when used for containers. So, you need a container-aware tool that detects behavior and anomalies in the app at runtime to find and mitigate issues.

  2. Rogue containers

It is possible to have rogue containers. Developers may launch them to test their code and leave them running. This may lead to exploits, as those containers might not have been thoroughly checked for security loopholes.

You can overcome this by maintaining separate environments for development, test, and production, and by enforcing role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has its own attack surface, and the larger the attack surface, the easier it is for an attacker to find and exploit a vulnerability and compromise the host operating system and the containers that run on it.

If you cannot use a container-specific operating system to minimize the attack surface, you can follow the NIST SP 800-123 guide to server security.

  2. Shared kernel

If you run only containers on a host OS, you will have a smaller attack surface than on a general-purpose host machine, where you would need the libraries and packages required to run a web server, a database, and other software.

You should not mix containers and non-containers workload on the same host machine.

If you wish to further explore this topic, I suggest you read NIST.SP.800-190.



About the Author –

Anandharaj is a lead DevSecOps engineer at GAVS and has over 13 years of experience in cybersecurity across different verticals, including network security, application security, computer forensics, and cloud security.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, especially in the mainstream media and also among our clients, is that to conduct a successful attack against an IT system it is necessary to ‘investigate’ and find a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risks is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST found that correcting security bugs early on is orders of magnitude cheaper than doing so when the development has been completed.

When composing text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, there are security tools known as AST (Application Security Testing) tools that find programming errors that introduce security weaknesses. ASTs report the file and line where the vulnerability is located, in the same way that a text editor reports the page and line that contain a typo.

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just as it is almost impossible to catch all errors in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools also known as ASTs, or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

The SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming that could lead to a vulnerability.
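To make the idea of a ‘known pattern’ concrete, here is a small illustrative sketch in Python (not taken from any particular SAST product) of the kind of code a static analyzer typically flags, alongside the safer form it would recommend:

```python
# Illustrative only: an insecure pattern a SAST tool can match on, and the fix.
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Concatenating untrusted input into SQL: a classic injection pattern that
    # static analysis detects by matching the code structure, no traffic needed.
    return conn.execute("SELECT * FROM users WHERE name = '" + username + "'").fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the remediation a SAST report would typically point to.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```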

The DAST dynamic analysis replicates the view of an attacker. Here, the tool executes hundreds or thousands of queries against the application, designed to replicate the activity of an attacker, to find security vulnerabilities. This is a black-box analysis because the point of view is purely external, with no knowledge of the application’s internal architecture.

The level of detail provided by the two types of tools is different. SAST tools provide file and line where the vulnerability is located, but no URL, while DAST tools provide the external URL, but no details on the location of the problem within the code base of the application. Some teams use both tools to improve visibility, but this requires long and complex triaging to manage the vulnerabilities.

The Interactive AST Approach

The Interactive Application Security Testing (IAST) tools combine the static approach and the dynamic approach. They have access to the internal structure of the application, and to the way it behaves with actual traffic. This privileged point of view is ideal to conduct security analysis.

From an architecture point of view, the IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms such as Application Performance Monitoring tools (APMs) share this proven approach.

Once the agent has been installed, it incorporates automatic security sensors in the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage is much better than the visibility that SAST and DAST rely on.

In terms of specific results, we can look at two important metrics: how many types of vulnerabilities the tool finds, and how many of the identified vulnerabilities are false positives. The best DAST is able to find only 18% of the existing vulnerabilities on a test application. Even worse, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not true problems!


Source: Hdiv Security via OWASP Benchmark public result data

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments: development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic points of view allows us to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces manual verification activities.

An IAST tool can add a lot of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.

Reduce Test Times and Increase Coverage with AI & ML

Kevin Surace

Chairman & CTO, Appvance.ai

With the need for frequent builds—often many times in a day—QEs can only keep pace through AI-led testing. It is the modern approach that allows quality engineers to create scripts and run tests autonomously to find bugs and provide diagnostic data to get to the root cause.

AI-driven testing means different things to different QA engineers. Some see it as using AI for identifying objects or helping create script-less testing; some consider it as autonomous generation of scripts while others would think in terms of leveraging system data to create scripts which mimic real user activity.

Our research shows that teams who are able to implement what they can in scripts and manual testing have, on average, less than 15% code, page, action, and likely user flow coverage. In essence, even if you have 100% code coverage, you are likely testing less than 15% of what users will do. That in itself is a serious issue.

Starting in 2012, Appvance set out to rethink the concept of QA automation. Today our AIQ Technology combines tens of thousands of hours of test automation machine learning with the deep domain knowledge, the essential business rules, that each QE specialist knows about their application. We create an autonomous expert system that spawns multiple instances of itself that swarm over the application, testing at the UX and API levels. Along the way, these Intelligences write the scripts, hundreds and thousands of them, that describe their individual journeys through the application.

And why would we need to generate so many tests fully autonomously? Because applications today are 10X the size they were just ten years ago, but your QE team doesn’t have 10X the number of test automation engineers, and you have 10X less time to do the work than 10 years ago. Just to keep pace with the dev team, each quality engineer needs to be 100X more productive than they were 10 years ago.

Something had to change; that something is AI.

AI-testing in two steps

We leveraged AI and witnessed over 90% reduction in human effort to find the same bugs. So how does this work?

It’s really a two-stage process.

First, leveraging key AI capabilities in TestDesigner, Appvance’s codeless test creation system, we make it possible to write scripts faster, identify more resilient accessors, and substantially reduce maintenance of scripts.

With AI alongside you as you implement an automated test case, you get a technology that suggests the most stable accessors and constantly improves and refines them. It also creates “fallback accessors”: when a test runs and hits an accessor change, the script can continue even though changes have been made to the application. And finally, the AI can self-heal scripts that break and update them with new accessors without human assistance. These AI-based, built-in technologies give you the most stable scripts every time, with the most robust accessor methodologies and self-healing. Nothing else comes close.

The second stage is the autonomous generation of tests. To beat the queue and crush it, you need a heavy lift for finding bugs and, as we have learnt, to go far beyond the use cases that a business analyst listed. Job one is to find bugs and prioritize them, leveraging AI to generate tests autonomously.

Appvance’s patented AI engine has already been trained with millions of actions. You will teach it the business rules of your application (machine learning). It will then create real user flows, take every possible action, discover every page, fill out every form, get to every state, and validate the most critical outcomes just as you trained it to do. It does all this without writing or recording a single script. We call this ‘blueprinting’ an application. We do this at every new build. Multiple instances of the AI will spin up, each selecting a unique path through the application, typically finding thousands or more flows in a matter of minutes. When complete, the AI hands you the results, including bugs, all the diagnostic data to help find the root cause, and the reusable test scripts to reproduce the bug. A further turn of the crank can refine these scripts into exact replicas of what production users are doing and apply them to the new build. Any modern approach to continuous testing needs to leverage AI both to help QA engineers create scripts and to autonomously create tests, so that both parts work together to find bugs and provide data to get to the root cause. That AI-driven future is available today from Appvance.

About the Author –

Kevin Surace is a highly lauded entrepreneur and innovator. He’s been awarded 93 worldwide patents, and was Inc. Magazine Entrepreneur of the Year, CNBC Innovator of the Decade, a Davos World Economic Forum Tech Pioneer, and inducted into the RIT Innovation Hall of Fame. Kevin has held leadership roles with Serious Energy, Perfect Commerce, CommerceNet and General Magic and is credited with pioneering work on AI virtual assistants, smartphones, QuietRock and the Empire State Building windows energy retrofit.

Mentoring – a Win-Win Situation

Rama Vani Periasamy

“If I have seen further it is by standing on the shoulders of giants.” — Isaac Newton

Did you know the English word ‘Mentor’ actually originated from the Greek epic ‘The Odyssey’?

When Odysseus had to leave his kingdom to lead his army in the Trojan war, his son Telemachus was left under the guidance of a friend ‘Mentor’. Mentor was supposed to guide and groom Telemachus during his developmental years and make him independent. The word ‘Mentor’ was thus incorporated in the English language. We use the word in the same context that existed in Greek Mythology – to guide a person, make him/her an independent thinker, and a doer.

In the age of technology, there may be tools and enormous amounts of data to get a competitive advantage, but they’re no match for a mentor. The business hall of fame is adorned with the names of people who discovered that finding a mentor made all the difference.

A lot of people have been able to achieve greater heights than they imagined because they were able to tap into their potential and that is the energy mentoring brings in.

In today’s world, a lot of corporate offices offer mentoring programs that cut across age groups (the cross-gens), backgrounds, and experiences, and that benefit everyone. But sometimes the mechanisms and expectations of a mentoring program are not clear, which makes the practice unsuccessful. Today’s young generation thinks the internet can quench their thirst for knowledge. They do not see mentors as guiding beacons to success, but only as a way to meet immediate learning needs. To cite an example, mentoring is equivalent to not just teaching a man to fish, but also sharing the experiences, tricks, and tips so that he becomes an independent fisher. More often than not, our current generation fails to understand that even geniuses like Aristotle and Bill Gates needed a mentor in their lives.

When mentoring is so powerful, why don’t we nurture the relationship? What stops us? Is time a factor? Not really. Any relationship needs some amount of time to be invested and so is the case with mentoring. Putting aside a few hours a month is an easily doable task, especially for something that is inspiring and energizing. Schedules can always be shuffled for priorities.

Now that we know that we have the time, why is it always hard to find a mentor? To begin with, how do you find a mentor? Well, it is not as difficult as we think. When you start looking for them, you will eventually find one. They are everywhere but may not necessarily be in your workplace.

We have the time, we have a mentor, so what are the guidelines in the mentoring relationship?

The guidelines can be extracted very much in the word ‘MENTOR’.

M=Mission: Any engagement works only if you have something to work on. Both the mentor and mentee must agree on the goals and share their mission statement. Creating a vision and a purpose for the mentoring relationship adds value to both sides and this keeps you going. Articulating the mission statement would be the first activity, to begin with in a mentor-mentee relationship.

E=Engage: Agree on ways to engage that work with your personalities and schedules. Set ground rules on the modes of communication. Is it going to be a periodic one-on-one conversation or remote calls? Find out the level of flexibility. Is an impromptu meeting fine? Can emails or text messages be sent? Decide on the communication medium and time.

N=Network: Expanding your network with that of your mentor or mentee and cultivating productive relationships will be key to success. While expanding your network will be productive, remember to tread carefully. Seek permission, show respect, and even ask for an introduction before you reach out to the other person’s contacts.

 T=Trust: Build and maintain trust with your mentoring partner by telling the truth, staying connected, and being dependable. And as the mentorship grows, clear communication and honesty will deepen the relationship. Building trust takes time so always keep the lines of communication open.

O=Opportunity: Create opportunities for your mentee or mentor to grow. A mentor-mentee relationship is a two-way lane, where opportunities can come from both sides that may not be open to non-mentors/mentees. Bringing in such opportunities will only help the other person achieve his/her goal or the mission statement that was set at the beginning.

R=Review and Renew: Schedule a regular time to review progress and renew your mentoring partnership. This will help you keep your progress on track and it will also help you look for short goals to achieve. Reviewing is also going to help retrospect if a different strategy is to be laid out to achieve your goals.

Mentoring may sound irrelevant and unnecessary while we are surviving a pandemic and going through bouts of intense emotions. But I feel it is even more necessary during this most unusual situation we’re facing. Mentoring could be one of the ways to combat anxiety and depression caused by isolation and the inability to meet people face-to-face.

Mentoring can be done virtually through video calls, by setting up a time to track the progress of your goals and discuss challenges and accomplishments. Mentoring also proves to be the place to ask difficult questions, because it is a “no judging” relationship and an absolutely safe place to deal with work-related anxiety and fear. I still recall my early days as a campus graduate, when I was assigned a ‘Buddy’, the go-to person. With them, I discussed a lot of my ‘what’, ‘why’ and ‘how’ questions about work and the corporate world, which I had resisted opening up about to my supervisors.

Mentoring takes time. Remember the first day you struggled to balance on your bicycle and may have fallen down hurting your knees? But once you learned to ride, you would have loved your time on the saddle. The same applies to mentoring. Investing the time and effort in mentoring will energize you even better than a few hours of Netflix or scrolling on Instagram. Let us create a culture that shares knowledge, guides & encourages nonstop, like how Socrates taught Plato, Plato taught Aristotle and Aristotle held the beacon for many. There is an adage that goes “when you are ready to become a teacher, the student appears”.

“A mentor is someone who allows you to see the hope inside yourself.” — Oprah Winfrey

The article is based on the book “One Minute Mentoring” by Ken Blanchard & Claire Diaz Ortiz.

About the Author –

Rama is that everyday woman you see who juggles between family and a 9 hours work life. She loves reading history, fiction, attempting half marathons, and traveling.
To break the monotony of life and to share her interest in books & travel, she blogs and curates at www.kindleandkompass.com

Significance of the CI/CD Process in DevOps

Muraleedharan Vijayakumar

Developing and releasing software can be a complicated process, especially as applications, teams, and deployment infrastructure grow in complexity themselves. Often, challenges become more pronounced as projects grow. To develop, test, and release software quickly and consistently, developers and organizations have created distinct strategies to manage and automate these processes.

Did you know? Amazon releases new production code once every 11.6 seconds.

Why CI/CD/CD?

The era of digital transformation demands faster deployments into production. But faster deployments should not mean defective releases; the solution is DevOps. The development team, operations team, and IT services team have to work in tandem, and the magic circle that brings all of them together is DevOps.

To adopt a DevOps culture, implementing the right DevOps tools with the right DevOps process is essential. Continuous integration/continuous delivery/continuous deployment (CI/CD/CD) helps developers and testers ship software faster and more safely in a structured environment.

The biggest obstacle to overcome in constructing a DevOps environment is scalability. There are no definite measures of the scalability of an application or product development, but the DevOps environment should be ready to scale to meet business and technology needs. This lays a strong foundation for building agile DevOps for the business.

Continuous Integration and Deployment have brought many benefits to the software delivery process. Initiating automated code builds once checks are completed, running automated test suites, and flagging errors and breaking builds when compliance is not met have eased the way to deploying a stable release into a staging or production environment while eliminating manual errors and human bias.

How is CI/CD/CD Set Up?

Version control tools play an important role in the success of our DevOps pipeline. And designing a good source stage is pivotal to our CI/CD success. It ensures that we can version code, digital assets, and binary files (and more) all in one spot. This enables teams to communicate and collaborate better — and deploy faster.

Our code branching strategy determines how and when developers branch and merge. When deciding on a strategy, it is important to evaluate what makes sense for our team and product. Most version control systems will let you adopt and customize standard strategies like mainline, trunk-based, and task/feature branching.

Typical Branching Model Followed

A basic workflow starts with code being checked out. When the work in the branch is committed, CI processes are triggered. This can be done with a merge or pull request. Then the CI/CD pipeline kicks into high gear.

The goal of CI/CD is to continuously integrate changes to find errors earlier in the process, also known as ‘Shift Left’. The ultimate goal of having an automated CI/CD process in place is to identify errors or flag non-compliance at an early stage of the development process. This increases the project’s velocity by avoiding late-stage defects and delays. It creates an environment where code is always ready for a release. With the right branching strategy, teams are equipped to deliver success.

Continuous Integration: Integrating newly developed code with the central repository is continuous integration. Automated CI results in automated builds that are triggered to merge the newly developed code into the repository. As part of this process, plugins can be added to perform static code analysis, security compliance checks, etc., to identify whether the newly added code would have any impact on the application. If there are compliance issues, the automated build breaks, and this is reflected to the developer with insights. Automated CI helps in increasing the productivity of the developers and the team.
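As an illustration only, the gating behavior described above can be sketched as a small driver script. In practice the stages would live in the CI tool’s own configuration, and the tool names used here (flake8 for static analysis, pytest for tests) are assumptions:

```python
# Hypothetical sketch of a CI gate: run each check in order and break the
# build (non-zero exit) as soon as one fails, so the developer gets feedback early.
import subprocess
import sys

STAGES = [
    ("static code analysis", ["flake8", "."]),
    ("unit tests",           ["pytest", "-q"]),
]

for name, cmd in STAGES:
    print(f"Running {name} ...")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Breaking the build blocks the merge until the check passes.
        print(f"Build broken at stage: {name}")
        sys.exit(result.returncode)

print("All checks passed; the build can proceed to delivery.")
```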

Continuous Delivery: At the end of a successful CI run, Continuous Delivery is triggered. CD automates the software delivery process and commits to delivering the integrated code to the production stage without bugs or delays. CD helps merge the newly developed code into the main branch of the software so that a production-ready build is available with all the checks in place. CD also checks the quality of the code and performs tests to determine whether the functional build can be released to the production environment.

Continuous Deployment: The final and most critical part of DevOps is Continuous Deployment. After the successful merging of certified code, the pipelines are triggered to deploy the code into the production environment. These pipelines are also triggered automatically. The pipelines are constructed to handle the target environment, be it jar or container deployments. The most important aspect of this pipeline is tagging the releases that are deployed to the production environment. If there are rollbacks, these tags help the team roll back to the right version of the build.

CI/CD/CD is an art that needs to be crafted in the right and most efficient way to help the software development team achieve success at a faster pace.

Different Stages & Complete DevOps Setup

What is the CI/CD/CD Outcome?


About the Author –

Muraleedharan is a senior technical manager who has managed, developed, and launched cutting-edge business intelligence and analytics platforms using big data technologies. He has experience hosting the platform in Microsoft Azure by leveraging MS PaaS. He is the product manager for zDesk, a Virtual Desktop offering from GAVS.
His passion is to get frictionless DevOps operational in an environment, bringing deployment time down to a few seconds.

Center of Excellence – Security

The Security Center of Excellence was instituted to set standards in the practice and be the point of contact for technical solutions, problem solving, etc. The broad objectives of this CoE are as follows:

  • Develop and maintain technical assets that can be leveraged across GAVS.
  • Enable Quality Governance by providing support in gating of architecture and design related deliverables.
  • Enable Operational Governance by establishing cadence for tech review of projects.
  • Create domain-based SMEs within the practice.
  • Train and upskill members in the practice.
  • Improve the customer satisfaction index by implementing new ideas and innovations across all engagements.
  • Create additional SOC services for market competency.
  • Automation – Detect, investigate and remediate cyberthreats with playbooks and response workflows.

COVID and the changing nature of threat landscape

For many industries, it has been a challenging period ever since the COVID outbreak, more so for those in security. Clearly, the bad actors have a lot of time at their disposal, which is reflected in the innovative techniques being used to attack targets. Vigilance in monitoring alerts and the application of threat hunting techniques are key to diagnosing problems at the initial stages of compromise in the worst-case scenario.

Source: IBM X-Force Research

For enterprises that have no clue about MDR (Managed Detection and Response), this is a good time to start. We have innovative, cost-effective solutions – “make hay while the sun shines”. Small and large corporations alike have lost business and money because of lapses in security controls and monitoring. Now is not the time to make headlines as the victim of a major breach.

Our team is developing a vulnerability alerting tool, which we intend to equip customers with to provide qualified bulletin alerts, i.e. alerts only on vulnerabilities that affect them. This is a first of its kind in the market and will greatly benefit existing and new customers.

Expanding into IAM and PAM

The Security practice is expanding into Identity & Access Management (IAM) and Privileged Access Management (PAM) services. With new customers being onboarded into these focus areas for products such as SailPoint, Thycotic, Ping, CyberArk, Okta, and Azure PIM, we are expanding our talent pool through recruitment as well as training and certification. This should largely benefit our existing customers and prospects who intend to leverage our security practice to fulfil their cybersecurity needs.

Expansion of our Red Team

Our Red Team within the practice has been expanded with many talented members, including some with bug bounty bragging rights. This has enormously helped in performing intensive tests on our internal product platforms and security assessments for customers. We have also invested extensively in tools for the Red Team to help them reduce assessment times.

Certification drive

Some more analysts have been certified in AZ-500 and CyberArk and trained on Darktrace. GAVS’ security analysts are taking full advantage of the opportunity to increase their knowledge, thanks to the generosity of our alliances and training sites like Pluralsight. Even the mighty Microsoft opened up their learning website for free, enabling young talent to equip themselves with critical DevOps and cloud security skills.

As part of CoE initiatives, we have:

  • Aligned our security roadmap with industry trends to ensure solutions tailored to customer pain points.
  • Extended our SOC practice with IAM and PAM in 2020.
  • Identified domain-based and product-based SMEs for quick support.

We are currently in the process of creating security products, GVAS and GSMA, to help customers proactively identify and address vulnerabilities and perform self-maturity assessments of their cybersecurity posture. We are also working to add Operational Security to our Security practice.

If you have any questions about the CoE, you may reach out to them at COE_INFOSEC@gavstech.com

CoE Team Members

  • Venkatakrishnan A
  • Shivaram J
  • Alex Nepolian Lawrence
  • Ravindran Girikrishnan
  • Aravindah Sadhasivam Subramanian
  • Vijayakumar Veerapandiyan
  • Thubati Uday
  • Ganta Venkata Sandeep
  • Sundaramoorthy S
  • Sukanya Srinivasan

The Pandemic and Social Media

Prabhakar Mandal

The COVID-19 outbreak has established the importance of digital readiness during pandemics. Building the necessary infrastructure to support a digitized world is the current mandate.

Technology has advanced much in the past century since we were hit by the Spanish Flu pandemic in 1918, and it plays a crucial role in keeping our society functional. From remote working to distance learning, and from telehealth to robot deliveries, our world is set to witness a lasting change post this pandemic.

As with other major and minor events of the past few years, social media is playing a big role in shaping people’s perception of the ongoing pandemic. Not just that, the social media platforms have also contributed to spreading information/misinformation, helping people cope with the strange times, and raising awareness about some pressing issues.


Social Media and the pandemic: The Good!

Social media is one of the most effective ways to share news nowadays (it may be the only way for some people), especially if you are trying to alert the masses quickly. First-hand accounts of those who were infected and recovered were available almost in real-time. Scenes of lockdowns from the countries that first imposed it gave us a heads-up on what was due to come. If only we’d paid more heed to it.

With most of the world stuck at home, our mobile devices have increasingly become the go-to option to connect with the outside world. Social media usage has surged during the lockdown, with various apps witnessing a manifold increase in their traffic.

From educating to entertaining, social media platforms have stepped up as well. Movie and video streaming apps have redefined movie/video watching behavior by introducing features that allow users to host long-distance movie nights with friends and family.

We also witnessed a surge in various ‘online challenges’ that people could do in their homes and upload online. While some may view them as naïve, experts claim these are part of the various coping mechanisms for people.

Social media surfing has gained a significant share of the pie of leisure activities. Be honest, how many of us living alone are doing anything but scrolling through these apps in our free time? But thanks to the social media ‘influencers’, scores of us are being motivated to work out at home, eat healthily, pick up a book, or learn something new.

Posts from health workers and others on the frontline have also helped spread the word on the difficulties they’re facing and rallied efforts to help them.

Online solidarity has spilled over offline as well. People are taking to social media to offer support in any way they can, such as picking up groceries for those who are unable to leave home or sharing information on how to support local businesses who are struggling. Communities are rallying together to support organizations and individuals by opening fundraisers to a larger audience.

Social Media and COVID-19: The Bad

Unfortunately, the impact of social media has not been all good. News on social media spreads fast, fake news even faster. Misinformation can cause panic, and can even turn out to be fatal on health issues. As a practice, we should all do a bit of research and validate the information from ‘reputed sources’ before sharing it.

This next bit is more of a tip…Whether it’s a business or a personal profile, you should refrain from posting anything that makes fun of, ridicules, or trivializes the situation. Not only is that insensitive, but it could also spell trouble for you, especially as a business.

The ‘influencers’ have been found guilty of misusing their power and taking advantage of the situation. Various inauthentic posts had gone viral before being pulled down. Do social validation and fame know no limits?

It is true that people often turn to social media as a stress-buster, but experts say it is equally stress-inducing for some individuals. It is important to note here that we’re also in the midst of an ‘infodemic’ – an anxiety-triggering over-abundance of information.

It is easy to overlook, especially now, the devastation that mental health issues cause globally. Studies have reported an increase in mental health issues attributed to social media in recent years. Psychologists say the lockdown will only add to that. Needless to say, mental health has a bearing on physical health as well.

Anti-rich sentiments have also gained momentum in the past weeks, as the pandemic makes the class divides glaringly obvious.

Conclusion

From the transparency we have gained through the current COVID-19 situation, we now understand that we were not prepared to handle it. Many developed countries have had their health systems overwhelmed, those on the frontlines are being overworked, and even the most advanced nations are stumbling to get their economies back up. The next pandemic is not a matter of “if it happens”, but “when it happens”. We need to be prepared at an individual and collective level. Indeed, technology has advanced and will continue to advance exponentially, but institutions and societies need to accelerate in adapting to it and continue investing in building the technology systems for preparedness.

About the Author –

Prabhakar is a recruiter by profession and a cricketer by passion. His focus is on hiring for the infra vertical. He hails from a small town in Bihar and was brought up in Pondicherry. Prabhakar has represented Pondicherry in U-19 cricket (National School Games). In his free time, he enjoys reading, working on his health and fitness, and spending time with his family and friends.

Assess Your Organization’s Maturity in Adopting AIOps


Anoop Aravindakshan

Artificial Intelligence for IT operations (AIOps) is adopted by organizations to deliver tangible Business Outcomes. These business outcomes have a direct impact on companies’ revenue and customer satisfaction.

A survey from AIOps Exchange 2019 reports that 84% of the business owners who participated confirmed that they are actively evaluating AIOps for adoption in their organizations.

So, is AIOps just automation? Absolutely NOT!

Artificial Intelligence for IT operations implies the implementation of truly autonomous artificial intelligence in ITOps, which needs to be adopted as an organization-wide strategy. Organizations will have to assess their existing landscape and processes, and decide where to start. That is the only way to achieve a true implementation of AIOps.

Every organization trying to evaluate AIOps as a strategy should read through this article to understand their current maturity, and then move forward to reach the pinnacle of Artificial Intelligence in IT Operations.

The primary success factor in adopting AIOps is derived from the Business Outcomes the organization is trying to achieve by implementing AIOps – that is the only way to calculate ROI.

There are 4 levels of maturity in AIOps adoption. Based on our experience in developing an AIOps platform and implementing it across multiple industries, we have arrived at these 4 levels. Assessing an organization against each of these levels helps in achieving the goal of TRUE Artificial Intelligence in IT Operations.

Level 1: Knee-jerk

Events and logs are generated in silos and collected from various applications and devices in the infrastructure. These are used to generate alerts that are passed to command centres for escalation as per the defined SOPs (standard operating procedures). The engineering teams work in silos, unaware of the business impact these alerts could potentially create. Here, operations are very reactive, which could cost the organization millions of dollars.

Level 2: Unified

All events, logs, and alerts are integrated into one central location. ITSM processes are unified. This helps in breaking silos, and engineering teams are better prepared to tackle business impacts. SOPs have been adjusted since the process is unified, but this is still reactive incident management.

Level 3: Intelligent

Machine learning algorithms (either supervised or unsupervised) are applied to the unified data to derive insights. Baseline metrics are calibrated and used as a reference for future events. With more data, the metrics get richer. The IT operations team can correlate incidents and events with business impacts by leveraging AI & ML. If the Mean Time To Resolve (MTTR) an incident has been reduced by automated identification of the root cause, the organization has attained Level 3 maturity in AIOps.
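As a rough illustration of the baseline idea (the metric series, window size, and threshold below are hypothetical, not taken from any specific AIOps platform), a simple rolling-baseline anomaly check might look like this:

```python
# Hedged sketch of a Level 3 building block: calibrate a baseline for a metric
# and flag deviations from it. All values below are illustrative.
import statistics

def flag_anomalies(series, window=30, z_threshold=3.0):
    """Flag points deviating more than z_threshold standard deviations
    from a rolling baseline built on the previous `window` samples."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid division by zero
        z = (series[i] - mean) / stdev
        if abs(z) > z_threshold:
            anomalies.append((i, series[i], round(z, 2)))
    return anomalies

# Example: steady response times with one spike the baseline should catch.
response_ms = [120 + (i % 5) for i in range(60)] + [480] + [121, 119, 123]
print(flag_anomalies(response_ms))
```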

Level 4: Predictive & Autonomous

The pinnacle of AIOps is Level 4. If incidents and performance degradation of applications can be predicted by leveraging artificial intelligence, application availability improves. Autonomous remediation bots can be triggered automatically, based on the predictive insights, to fix incidents that are likely to happen in the enterprise. Level 4 is a paradigm shift in IT operations – moving operations entirely from reactive to proactive.

Conclusion

As IT operations teams move up each level, the essential goal to keep in mind is the long-term strategy to be attained by adopting AIOps. Artificial intelligence has matured over the past few decades, and it is up to AIOps platforms to embrace it effectively. While choosing an AIOps platform, measure the maturity of the platform’s artificial intelligence coefficient.

About the Author:

An evangelist of the Zero Incident Framework™, Anoop has long been part of the product engineering team and has recently forayed into product marketing. He has over 14 years of experience in Information Technology across various verticals, including Banking, Healthcare, Aerospace, Manufacturing, CRM, Gaming, and Mobile.