Understanding Data Fabric

Srinivasan Sundararajan

In its recently announced technology trends in data management, Gartner has introduced the concept of “Data Fabric”. Here is the document: Top Trends in Data and Analytics for 2021: Data Fabric Is the Foundation (gartner.com).

According to Gartner, the data fabric approach can enhance traditional data management patterns and replace them with a more responsive one. Since it is key to enterprise data management strategy, this article looks at data fabric in more detail.

What is Data Fabric?

Today’s enterprise data stores and data volumes are growing rapidly. Data fabric aims to simplify the management of enterprise data sources and the ability to extract insights from them. A data fabric has the following attributes:

  • Connects to multiple data sources
  • Provides data discovery across data sources
  • Stores metadata and data catalog information about the data sources
  • Offers data ingestion capabilities, including data transformation
  • Provides a data lake and data storage options
  • Stores multi-modal data, both structured and unstructured
  • Integrates data across clouds
  • Includes an inbuilt graph engine to link data and model complex relationships
  • Supports data virtualization, to integrate with data that need not be physically moved
  • Provides data governance and data quality management
  • Includes an inbuilt AI/ML engine for machine learning capabilities
  • Enables data sharing both within and across enterprises
  • Offers easy-to-configure workflows without much coding (a low-code environment)
  • Supports comprehensive use cases like Customer 360, Patient 360, and more

As is evident, data fabric aims to provide a superset of all the desired data management capabilities under a single unified platform, making it an obvious choice for the future of data management in enterprises.

Data Virtualization

While most of the above capabilities are part of existing enterprise data management platforms, one capability that distinguishes a data fabric platform is data virtualization.

Data virtualization creates a data abstraction layer by connecting, gathering, and transforming data silos to support real-time and near-real-time insights. It gives you direct access to transactional and operational systems in real time, whether on-premises or in the cloud.

The following is a basic implementation of data virtualization, whereby an external data source is queried natively without actually moving the data. In this example, a Hadoop HDFS data source is queried from a data fabric platform so that the external data can be integrated with other data.

[Screenshot: querying a Hadoop HDFS data source from a data fabric platform]
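As a minimal, vendor-neutral sketch of the idea: an external source (an in-memory CSV standing in for an HDFS file) is read through a connector only at query time and joined with local data, so the external data is never copied into local storage first. All names and data here are illustrative, not tied to any specific product.

```python
import csv
import io
import sqlite3

# Stand-in for an external HDFS file: in a real data fabric this data
# stays in Hadoop and is fetched through a connector at query time.
EXTERNAL_HDFS_CSV = """customer_id,region
1,US
2,EU
3,APAC
"""

def read_external_source():
    """Connector: streams rows from the 'remote' source without first
    landing them in local storage (hypothetical stand-in for HDFS)."""
    return csv.DictReader(io.StringIO(EXTERNAL_HDFS_CSV))

# Local operational store inside the fabric platform.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
local.executemany("INSERT INTO orders VALUES (?, ?)",
                  [(1, 120.0), (2, 80.0), (1, 40.0)])

# Virtual join: integrate external data with local data at query time.
totals = {cid: amt for cid, amt in local.execute(
    "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")}
joined = [(row["region"], totals.get(int(row["customer_id"]), 0.0))
          for row in read_external_source()]
print(joined)  # [('US', 160.0), ('EU', 80.0), ('APAC', 0.0)]
```

Real platforms do the same thing behind a SQL dialect (for instance, external tables), but the principle is identical: the query plan, not a batch copy, pulls the remote rows.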

While this kind of external data source access has been available for a while, data fabric also aims to solve the performance issues associated with data virtualization. Some of the techniques used by data fabric platforms are:

  • Pushing some computations down to the external source to optimize the overall query
  • Scaling out compute resources through parallelism
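The first technique, predicate pushdown, can be illustrated with a rough sketch: a naive federated query pulls every row before filtering locally, while a pushdown query ships the filter to the source so only matching rows cross the wire. Here sqlite3 stands in for the remote engine; the table and data are made up.

```python
import sqlite3

# Stand-in "external source": a separate database that supports SQL,
# so filters can be executed remotely instead of after transfer.
remote = sqlite3.connect(":memory:")
remote.execute("CREATE TABLE events (id INTEGER, severity TEXT)")
remote.executemany("INSERT INTO events VALUES (?, ?)",
                   [(i, "high" if i % 10 == 0 else "low") for i in range(100)])

def fetch_without_pushdown():
    # Naive federation: pull every row across, then filter locally.
    rows = remote.execute("SELECT id, severity FROM events").fetchall()
    return [r for r in rows if r[1] == "high"], len(rows)

def fetch_with_pushdown():
    # Pushdown: the predicate travels to the source; only matching
    # rows are transferred.
    rows = remote.execute(
        "SELECT id, severity FROM events WHERE severity = ?", ("high",)
    ).fetchall()
    return rows, len(rows)

naive, transferred_naive = fetch_without_pushdown()
pushed, transferred_pushed = fetch_with_pushdown()
assert naive == pushed  # same answer either way
print(transferred_naive, transferred_pushed)  # 100 rows vs 10 rows
```

The answer is identical in both cases; what changes is the volume of data moved, which is exactly what the fabric's optimizer tries to minimize.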

Multi-Cloud

As explained earlier, another critical capability of data fabric platforms is their ability to integrate data from multiple cloud providers. This capability is at an early stage, as different cloud platforms have different architectures and no uniform way of connecting to one another. However, this feature will mature in the coming years.

Advanced Use Cases 

Data fabric should support advanced use cases like Customer 360, Product 360, etc. These are comprehensive views of all the linkages within enterprise data, typically implemented using graph technologies. Since data fabric supports graph databases and graph queries as an inherent feature, these advanced linkages are part of the data fabric platform.
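A Customer 360 view is essentially a transitive closure over a graph of record-to-record links. As a minimal sketch, a plain adjacency map and breadth-first search can stand in for a real graph engine (the node names below are invented):

```python
from collections import defaultdict, deque

# Toy Customer-360 graph: nodes are enterprise records, edges are the
# linkages a fabric's graph engine would maintain (names are made up).
edges = [
    ("customer:42", "order:1001"),
    ("customer:42", "ticket:77"),
    ("order:1001", "invoice:555"),
    ("ticket:77", "agent:9"),
]
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)  # linkages are bidirectional

def customer_360(start):
    """Breadth-first traversal collecting every record linked, directly
    or transitively, to the starting customer node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(customer_360("customer:42"))
```

A production graph engine would express this as a graph query (e.g. a multi-hop traversal) rather than hand-rolled BFS, but the linkage semantics are the same.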

Data Sharing  

Data fabric platforms should also focus on data sharing, not only within the enterprise but also across enterprises. While a focus on API management helps with data sharing, this functionality has to be enhanced further, as data sharing also needs to take care of privacy and other data governance needs.

Data Lakes 

While earlier platforms similar to data fabric were built on the enterprise data warehouse as a backbone, data fabric utilizes a data lake as its backbone. A data lake is a centralized repository that allows you to store all your structured and unstructured data at any scale. You can store your data as-is, without having to structure it first, and run different types of analytics – from dashboards and visualizations to big data processing, real-time analytics, and machine learning – to guide better decisions.
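The store-as-is, schema-on-read idea can be sketched in a few lines: raw events land as files exactly as received, and structure is applied only when an analysis reads them back. The paths and records below are purely illustrative.

```python
import json
import pathlib
import tempfile

# Miniature "data lake": raw records land as-is (schema-on-read), and
# structure is imposed only when an analysis runs.
lake = pathlib.Path(tempfile.mkdtemp()) / "raw" / "clickstream"
lake.mkdir(parents=True)

# Ingest: write events exactly as received, with no upfront modeling.
events = [
    {"user": "u1", "action": "view", "ms": 120},
    {"user": "u2", "action": "buy", "ms": 340},
    {"user": "u1", "action": "buy", "ms": 95},
]
for i, ev in enumerate(events):
    (lake / f"event-{i}.json").write_text(json.dumps(ev))

# Analyze later: apply structure at read time, not at write time.
records = [json.loads(p.read_text()) for p in sorted(lake.glob("*.json"))]
buyers = sorted({e["user"] for e in records if e["action"] == "buy"})
print(buyers)  # ['u1', 'u2']
```

Real lakes use object storage and columnar formats rather than loose JSON files, but the contract is the same: ingestion never blocks on a schema, and each analysis decides its own view of the raw data.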

Data Fabric Players

At the time of writing this article, there are no ratings from Gartner in the form of a Magic Quadrant for data fabric platforms. However, there is a report from Forrester that ranks data fabric platforms in the form of a Forrester Wave.

Some of the key platforms mentioned in that report are:

  • Talend
  • Oracle
  • SAP
  • Denodo Technologies
  • Cambridge Semantics
  • Informatica
  • Cloudera
  • Infoworks

While the detailed explanation and architecture of these platforms can be covered in a subsequent article, the building blocks of the Talend data fabric platform are illustrated in the diagram below.

[Diagram: building blocks of the Talend Data Fabric platform]

Enterprises can also consider building their own data fabric platform by combining the best features of various individual components. For example, from the Microsoft ecosystem perspective:

  • SQL Server Big Data Clusters has data virtualization capabilities
  • Azure Purview has data governance and metadata management capabilities
  • Azure Data Lake Storage provides data lake capabilities
  • Azure Cosmos DB has a graph database engine
  • Azure Data Factory has data integration features
  • Azure Machine Learning and SQL Server have machine learning capabilities

However, as is evident, we are yet to see strong products and platforms in the area of multi-cloud data management, especially data virtualization across cloud providers in a performance-focused manner.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using the combination of Multi-Modal databases, Blockchain, and Data Mining. The solutions aim at Patient data sharing within Hospitals as well as across Hospitals (Healthcare Interoperability) while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero-knowledge proofs.

#EmpathyChallenge – 3 Simple Ways to Practice Empathy Consciously

Padma Ravichandran

A pertinent question for the post-COVID workforce is: can empathy be learnt? Should it be practiced only by leaders, or by everyone – can it be seamlessly woven into the fabric of the organization? We are seeing that the dynamics at play in remote teams are a little unpredictable, making each day uniquely challenging. Empathy is manifested through mindful behaviours, where one’s actions are recognized as genuine, personal, and specific to the situation. A few people can be empathetic all the time, a few practice it consciously, and a few are unaware of it.

Empathy is a natural human response that can be practiced by everyone at work to nurture an environment of trust. We often confuse empathy with sympathy – while sympathy is feeling sorry for one’s situation, empathy is understanding one’s feelings and needs, and putting in the effort to offer authentic support. It requires a shift in perspective, and building trust, respect, and compassion at a deeper level. As Satya Nadella, CEO of Microsoft, says, “Empathy is a muscle that needs to be exercised.”

Here are three ways to consciously practice empathy at work –

  • Going beyond yourself

It takes a lot to set aside how we feel that day, or what our priorities are. However, to be empathetic, one needs to be less judgemental. When consciously practicing empathy, you need to be patient with yourself and your thoughts, and not compare yourself with the person you are empathizing with. If we get absorbed by our own needs, it gets difficult to be generous and compassionate. We need to remember that empathy leads to influence and respect, and for that we should not get blindsided by our own perceptions.

  • Being a mindful and intentional listener

While practicing empathy, one has to refrain from criticism and be mindful of not talking about one’s own problems. We may get sympathetic and give unsolicited advice. Sometimes all it takes is being an intentional listener – avoiding distractions and maintaining a positive body language and demeanour. This will enable us to ask the right questions and collaborate towards a solution.

  • Investing in the person

Very often, we support our colleagues and co-workers by responding to their email requests. However, building positive workplace relationships and knowing the person beyond their email id makes it much easier to foster empathy. Compassion needs to be not just in words but in action too, and that can happen only by knowing the person. Taking interest in a co-worker or a team member beyond a professional capacity does not come out of thin air. It takes conscious, continuous effort to get to know the person, showing care and concern, which will help us relate to the myriad challenges they go through – be it chronic illness or childcare – that correlate with their ability to engage at work. It will enable us to personalize the experience and see the person’s point of view holistically.

When we take a genuine interest in how we make others feel, we start mindfully practicing empathy. Empathy fosters respect. Empathy helps resolve conflicts better, builds stronger teams, inspires us to work towards collective goals, and breaks down authority. Does it take that extra bit of time to consciously practice it? Yes, but it is all worth it.


About the Author –

Padma is intrigued by Organization Culture and Behavior at workplace that impact employee experience. She is also passionate about driving meaningful initiatives for enabling women to Lean In, along with her fellow Sheroes. She enjoys reading books, journaling, yoga and learning more about life through the eyes of her 8-year-old son.

Balancing Management Styles for a Remote Workforce

Ashish Joseph

Operational Paradigm Shift

The pandemic has indeed impelled organizations to rethink the way they approach traditional business operations. The market realigned businesses to adapt to the changing environment and optimize their costs. For the past several months, nearly every organization has implemented work from home as a mandate. This shift in operations had both highs and lows in terms of productivity. Almost a year into the pandemic, the impacts are yet to be fully understood. The productivity of remote workers, month on month, shaped policies and led to investments in tools that aid collaboration between teams.

Impact on Delivery Centers

Technology companies have been leading the charge towards remote working, as many have adopted permanent work-from-home options for their employees. While identifying cost avenues for optimization, office space allocation and commuting costs are areas where redundant operational cash flow can be redirected to other areas for scaling.

The availability and speed of internet connections across geographies have aided the transformation of office spaces for better utilization of the budget. In the current economy, office spaces are becoming expensive and inefficient. The 2020 annual survey by JLL Enterprises reveals that organizations spend close to $10,000 per employee per year on global office real estate, on average. As offices have adopted social distancing policies, the need for more space per employee would result in even higher costs during pandemic operations. To optimize their budgets, companies have reduced their allocated spaces and introduced regional contractual sub-offices to reduce the commute expenses of their employees in big cities.

With this, the notion of a 9-5 job is slowly fading, and people are being paid based on their function rather than the time they spend at work. The flexibility of working hours, with performance linked to delivery, has driven momentum in productivity per resource. An interesting observation from this pandemic economy is that the share of jobs that can be done remotely in a country is proportional to the country’s GDP. A 2020 work-from-home survey by The Economist finds that only 11% of jobs can be done from home in Cambodia, versus 37% in America and 45% in Switzerland.

The fact of the matter is that a privileged minority has been enjoying work from home for the past several months, while a vast majority of the semi-urban and rural population doesn’t have the infrastructure to support their functional roles. For better optimization and resource utilization, India would need to invest heavily in this infrastructure to catch up on the GDP deficit of the past few quarters.

Long-term work-from-home options challenge the foundational fabric of our industrial operations. They can alter the shape and purpose of cities and change workplace gender distribution and equality. Above all, they can change how we perceive time, especially while estimating delivery.

Overall Pulse Analysis

Many employees prefer to work from home as they can devote extra time to their families. However, this option has been found to have a detrimental impact on organizational culture, creativity, and networking. Making decisions based on skewed information would adversely affect culture, productivity, and attrition.

To gather sufficient input for decisions, PwC conducted a remote work survey in 2020 called “When everyone can work from home, what’s the office for?”. Here are some insights from the report.

[Charts: insights from the PwC 2020 remote work survey]

Many businesses have aligned themselves to accommodate both on-premise and remote working models. Organizations need to figure out how to better collaborate and network with employees in ways that elevate the organizational culture.

As offices are slowly transitioning to a hybrid model, organizations have decentralized how they operate. They have shifted from working in a common centralized office to contractual office spaces as per employee role and function, to better allocate their operational budget. The survey found that 72% of the workers would like to work remotely at least 2 days a week. This showcases the need for a hybrid workspace in the long run. 

Maintaining & Sustaining Productivity

During the transition, keeping a check on the efficiency of remote workers was paramount. The absence of such checks would jeopardize delivery, severely impacting customer satisfaction and retention.

[Chart: survey results on remote worker productivity]

This number, however, could be far lower if the survey had been conducted at a larger scale. This signifies that productivity is not uniform and requires corrective action to maintain delivery. Approaching the issue from an employee’s standpoint would yield better results. The measures found to help remote workers be more productive are as follows.

[Chart: measures that help remote workers be more productive]

Many employees point out that greater flexibility of working hours and better equipment would help increase work productivity.

Most productivity hindrances can be solved by effective employee management. How a manager supervises their team members correlates directly with the team’s productivity and satisfaction with the project delivery.

Theory X & Theory Y

Theory X and Theory Y were introduced by Douglas McGregor in his book, “The Human Side of Enterprise”. He describes two styles of management – authoritarian (Theory X) and participative (Theory Y). The theory holds that employees’ beliefs directly influence their behavior in the organization, and that the approach taken by the organization has a significant impact on the ability to manage team members.

For Theory X, McGregor speculates, “Without active intervention by management, people would be passive, even resistant to organizational needs. They must therefore be persuaded, rewarded, punished, controlled and their activities must be directed.”


Work under this style of management tends to be repetitive, and motivation is based on a carrot-and-stick approach. Performance appraisals and remuneration are directly correlated to tangible results and are often used to control staff and keep tabs on them. Organizations with several tiers of managers and supervisors tend to use this style. Here, authority is rarely delegated, and control remains firmly centralized.

Even though this style of management may seem outdated, big organizations find it unavoidable to adopt due to the sheer number of employees on the payroll and tight delivery deadlines.

When it comes to Theory Y, McGregor firmly believes that objectives should be arranged so that individuals can achieve their own goals and happily accomplish the organization’s goal at the same time.


Organizations that follow this style of management would have an optimistic and positive approach to people and problems. Here the team management is decentralized and participative.

Working under such an organizational style bestows greater responsibilities on employees, and managers encourage them to develop skills and suggest areas of improvement. Appraisals in Theory Y organizations encourage open communication rather than exercising control. This style of management has become popular, as employees increasingly want meaningful careers and look forward to things beyond money.

Balancing X over Y

Even though McGregor suggests that Theory Y is better than Theory X, there are instances where managers need to balance the two styles depending on how the team functions, even after certain management strategies have been implemented. This is very important in a remote working context, where a delayed intervention could impact delivery. Even though Theory Y has creativity and discussion in its DNA, it has its limitations in terms of consistency and uniformity. An environment with varying rules and practices could be detrimental to the quality and operational standards of an organization. Hence, maintaining a balance is important.

When we look at a typical cycle of Theory X, we find that its foundational beliefs result in controlling practices, which provoke employee resistance, which in turn delivers poor results. The results cause the entire cycle to repeat, making the work monotonous and pointless.


Upon identifying resources that require course correction and supervision, understanding the root cause and then adjusting your management style to solve the problem is more beneficial in the long run. Theory X should only be used in dire circumstances requiring a course correction. The balance to maintain is how much control can be established without triggering the resistance that would impact the end goal.


Theory X and Theory Y can be directly correlated to Maslow’s hierarchy of needs. The reason Theory Y is superior to Theory X is that it focuses on the higher needs of the employee rather than their foundational needs. Theory Y managers gravitate towards making a connection with their team members on a personal level by creating a healthier atmosphere in the workplace. Theory Y brings in a pseudo-democratic environment, where employees can design, construct, and publish their work in accordance with their personal and organizational goals.

When it comes to Theory X and Theory Y, striking a balance will never be perfect. The American psychologist Bruce J. Avolio, in his paper titled “Promoting more integrative strategies for leadership theory-building”, speculates, “Managers who choose the Theory Y approach have a hands-off style of management. An organization with this style of management encourages participation and values an individual’s thoughts and goals. However, because there is no optimal way for a manager to choose between adopting either Theory X or Theory Y, it is likely that a manager will need to adopt both approaches depending on the evolving circumstances and levels of internal and external locus of control throughout the workplace”.

The New Normal 3.0

As circumstances keep changing by the day, organizations need to adapt to the rate at which the market is changing and envision new working models that also take human interactions into account. The crises of 2020 made organizations build up workforce capabilities that are critical for growth. Organizations must relook at their workforce, reskilling them in different areas of digital expertise as well as emotional, cognitive, and adaptive skills, to push forward in our changing world.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs two independent series called BizPective & The Inside World, focusing on breaking down contemporary business trends and growth strategies for independent artists, on his website www.ashishjoseph.biz.

Outside work, he is very passionate about basketball, music, and food.

Gender Microaggressions: Invisible Discrimination at Workplace

Priyanka Pandey

A 2020 headline read, ‘The number of female CEOs in the Fortune 500 hits an all-time record’. It sounds like great news until you start reading further. Only 37 of the 500 companies on the list were led by female CEOs – just 7.4%. But it also marks a considerable jump from the preceding years’ rates of 6.6% in 2019 and just 4.8% in 2018, i.e., 33 and 24 companies respectively. Another report, by McKinsey & Co., on advancing women’s equality in the Asia-Pacific region tells us that only around 25% of India’s workforce is female, and only 5% of them make it to the top. This decline in percentage is due to many women dropping out of their jobs. One of the major factors behind this decision is sexism at the workplace.

Sexism has made its way into the ‘work-from-home’ world as well. Imagine this scenario: in a discussion about hiring for a new project, a male committee member says, “I think we should hire more men as this project requires spending extra time and effort”. In this case, it is not very difficult to identify the prejudice. But consider another scenario: there is a need to move some machines, for which a person asks for help saying, “I need a few strong men to help me lift this”. Most of the time, people will not realize how problematic this statement is. This is an example of ‘gender microaggression’.

But what exactly is a microaggression? Microaggression is verbal or nonverbal behavior that, intentionally or unintentionally, communicates denigrating messages towards members of a minority or oppressed group, and that often goes unnoticed and unreported. In simple words, it is a form of discrimination that is subtle yet harmful. There are mainly three forms of microaggressions: microassaults (purposeful discriminatory actions), microinsults (communicating a covert insulting message), and microinvalidations (dismissing the thoughts of certain groups). Different kinds of gender microaggressions include sexual objectification, second-class citizenship, use of sexist language, assumption of inferiority, restrictive gender roles, invisibility, and sexist humor/jokes.

According to Australia’s sex discrimination commissioner, Kate Jenkins, people typically don’t raise their voice against everyday sexism because it can seem too small to make a fuss about – but it matters. As the Women in the Workplace report also reflects, “Microaggressions can seem small when dealt with one by one. But when repeated over time, they can have a major impact.”

Let’s go back to the earlier example, for those who could not identify what was wrong with that statement. When people use phrases like ‘strong men’, it implies that only men are strong and, conversely, that women are weak. This statement does not have to be focused on gender at all. It can be rephrased as “I need a few strong people to help me lift this”, and people around can determine for themselves who the strong helpers will be. A few other examples of common gender-related microaggressions are:

  • Mansplaining – Explaining a subject to a woman in a condescending, overconfident, and often oversimplified manner with a presumption that she wouldn’t know about it.
  • Manterrupting – Unnecessary interruption of a woman by a man whenever she is trying to convey her ideas or thoughts.
  • Bropropriating – A man taking a woman’s idea and showing it as his own hence, taking all the credit for it.
  • ‘Boys will be boys’ – A phrase used to excuse traditionally masculine behavior and avoid holding men accountable for their wrong deeds.
  • Using differentiated words when describing women and men, such as ‘Bossy’ versus ‘Leader’, ‘Annoying’ versus ‘Passionate’.

The pandemic has given way to a new surge of microaggressions for working women. The law firm Slater and Gordon conducted a poll of 2,000 remote workers and found that 35% of women reported experiencing at least one sexist demand from their employer since the lockdown started. For video conferences, some women were asked to wear more make-up or do something to their hair, while others were asked to dress more provocatively. Their bosses tried to justify this by saying it could ‘help win business’, or that it was important to ‘look nice for the team’. Nearly 40% said these demands were targeted at women rather than equally at their male peers. A lot of women are also being micromanaged by their managers while their male colleagues are not, which sends a message of distrust.

Research has indicated that experiences with these microaggressions, and many others not mentioned above, are related to negative impacts on standard of living, physical health, and psychological health – such as unequal wages, migraines, heart disease, depression, anxiety, and body image dissatisfaction. As a result, women who experience such insidious, everyday forms of sexist discrimination are three times more likely to regularly think about leaving the organization. Hence, sexism can impact not only the individual but also the overall performance and working culture of the organization. Eliminating such behavior at the physical and virtual workplace is extremely important. It will enable the organization to break down barriers to equal access to career and leadership opportunities for women, and will help include diverse thinking, perspectives, and experiences at every level. As individuals, the most basic yet effective thing we can do is develop an honest awareness of our own biases and stereotypes.

“Unless we tackle everyday sexism, the most innovative policies and initiatives designed to advance gender equality and inclusive and effective organisations will not deliver the change we need.” – Kate Jenkins

Here’s a small story of grace and grit which might inspire some to take a stand against such gender-related microaggressions. Back in the 1970s, when feminism was a word unheard of, an incident took place. A woman saw a job advertisement by a telecom company which said it required only male engineers. On seeing this requirement, she wrote a postcard to the company’s Chairman questioning the gender bias. She was then called for a special interview, where they told her their side of the story – “We haven’t hired any women so far”. To which she replied, “You must start from somewhere.” Her name was Sudha Murty, now Chairperson of the Infosys Foundation.

So, the next time when conversing with a colleague, consider all of this and be kind!

About the Author –

Priyanka is an ardent feminist and a dog-lover. She spends her free time cooking, reading poetry, and exploring new ways to conserve the environment.

Vision for 2021

Sumit Ganguli

CEO, GAVS Technologies

God, grant me the serenity to accept the things I cannot change,

Courage to change the things I can,

And the wisdom to know the difference.

The events of 2020 have reaffirmed in me the ethos conveyed by this stanza, from the Serenity Prayer.

For us, COVID has been up close and personal. One of our key clients, BronxCare Hospital, has been at an epicenter of the pandemic in New York City. The doctors, staff, and support staff, including GAVS’ IT support engineers, have experienced the devastating effects of this pandemic firsthand. GAVS’ technical team supported the ICUs and patient care units at the hospital during the peak of the pandemic.

“Every day we witness these heroic acts: one example out of many this week was our own Kishore going into our ICU to move a computer without full PPE (we have a PPE shortage). The GAVS technicians who come into our hospital every day are, like our doctors and healthcare workers, the true heroes of our time.”

Ivan Durbak, CIO, BronxCare Health System

“The GAVS team was instrumental in assisting the deployment of digital contactless care solutions and remote patient monitoring solutions during the peak of COVID. Their ability to react quickly really helped us save more lives than we otherwise could have, with technology at the forefront.”

Dr. Sridhar Chilimuri, Chairman, Dept. of Medicine, BronxCare Health System

The alacrity with which our colleagues in India addressed the remote working situation, and the initiative they have demonstrated in maintaining business continuity for clients in the US, have inspired us at GAVS and reaffirmed our belief that we are on the way to creating a purposeful company.

The biggest learning from 2020 is that we need to be mindful of the fragility of life and truly make every day count. At GAVS, we are committed to using technology and service for the betterment of our clients and our stakeholders, anchored in our values of Respect, Integrity, Trust, and Empathy.

The year was not without some positives. Thanks to some new client acquisitions and renewed contracts, we have been able to significantly expand the GAVS family and have registered 40% growth in revenue.

We have formed Long 80, a GAVS & Premier, Inc. JV, and have started reaching out to healthcare providers in the US. We are approaching some of the largest hospitals in North America, offering our AI-based infrastructure managed services, cybersecurity solutions, and prescriptive and predictive healthcare solutions based on analytics.

“Moving from a vendor-only model with GAVS to a collaborative model through Long 80 expands Premier’s current technology portfolio, enabling us to offer GAVS’ technology, digital transformation and data security services and solutions to US healthcare organizations. We are extremely excited about this opportunity and look forward to our new relationship with GAVS.”

Leigh Anderson, President, Performance Services, Premier, Inc.

This year, we see the Premier team growing by an additional 120 people to continue supporting their initiative to reduce costs, improve efficiency, enhance productivity, and achieve faster time to market.

We aim to hit some milestones in our journey of enabling AI-driven Digital Transformation in the Healthcare space. We have constituted a team dedicated to achieving that.

We are contemplating establishing the GAVS Healthcare Institute in partnership with leading institutions in India and the US to develop competency within GAVS in the latest technologies for the healthcare space.

GAVS is committed to being a company focused on AI and newer technologies, and to promoting GAVS’ AI-led Technology Operations platform, Zero Incident Framework. In 2021, we will work on increasing our ZIF sites around the US and India.

Based on inputs from our Customer Advisory Board, we at GAVS would like to build a competency around Client Relationship and empower our Client Success Managers to evolve as true partners of our Clients and support their aspirations and visions.  

GAVS is also making strong progress in the BFS sector and we would like to leverage our expertise in AI, Blockchain, Service Reliability and other digital technologies.

GAVS has the competency to support multiyear contracts, and there will be a push to reach out to sourcing companies, influencers, and partners to garner this long-term, predictable business.

We will continue to build competency and expertise around Innovation, and there are some initiatives that we will be putting in place to promote a Culture of Innovation and have measurable successes under Novelty of Innovation.

Our experience of 2020 has inspired us to once again remind ourselves that we should make GAVS an aspirational company, a firm that is purposeful and anchored with our values.

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. However, new advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is and then the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard. They are used to isolate different cargoes transported via ships. Software technologies use a similar containerization approach.

Containers are different from Virtual Machines (VMs), where VMs need a guest operating system that runs on a host operating system (OS). Containers use OS virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate operating system.

In containers, software and its dependencies are packaged so that they can run anywhere, whether on an on-premises desktop or in the cloud.


Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Containers Image Vulnerabilities

When a container image is created, it may be fully patched with no known vulnerabilities. But a vulnerability might be discovered later, when the container image is no longer being patched. Traditional systems can be patched in place when a fix for a vulnerability becomes available, but for containers, updates must be upstreamed into the images and the containers then redeployed. So, containers carry vulnerabilities when an older image version remains deployed.

Also, if the container image is misconfigured or unwanted services are running, it will lead to vulnerabilities.

Countermeasures

If you use traditional vulnerability assessment tools to assess containers, they will produce false positives. You need a tool that has been designed to assess containers, so that you get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together. Hence, there are chances of malicious files getting added, unintentionally or intentionally. Such malicious software will have the same effect as on traditional systems.

If secrets are embedded in clear text, it may lead to security risks if someone unauthorized gets access.

Countermeasures

Continuous monitoring of all images for embedded malware with signature and behavioral detection can mitigate embedded malware risks.

Secrets should never be stored inside a container image; when required, they should be provided dynamically at runtime.
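
As a simple illustration, the sketch below (Python, with a hypothetical `DB_PASSWORD` variable name) contrasts baking a secret into an image with reading it from the environment at runtime, where an orchestrator or secrets manager would inject it:

```python
import os

# Anti-pattern: a secret hardcoded in the source or image would look like this:
# DB_PASSWORD = "s3cr3t"   # ends up in every copy of the image -- never do this

# Simulate the orchestrator injecting the secret into the container's
# environment at runtime (e.g. from a platform secret store or vault).
os.environ.setdefault("DB_PASSWORD", "injected-at-runtime")

# The application reads the secret only when it runs, so the image itself
# never contains the clear-text value.
db_password = os.environ["DB_PASSWORD"]
```

In a real deployment the `setdefault` line would not exist; the value would come from the platform’s secret store.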

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This capability may lead teams to run container images from a third party without validating them, thus introducing data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images, to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

Registry is nothing but a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, man-in-the-middle attacks can intercept network traffic to steal programmer or admin credentials, or to serve outdated or fraudulent images.

You should configure development tools and container runtimes to connect to registries only over encrypted channels, to overcome insecure connection issues.

  2. Insufficient authentication and authorization restrictions

As we have seen, registries store container images containing sensitive information. Insufficient authentication and authorization can result in exposure of the technical details of an app and loss of intellectual property. It can also lead to compromise of containers.

Access to registries should be authenticated, and only trusted entities should be able to add images; all write access should be periodically audited and read access should be logged. Proper authorization controls should be enabled to avoid authentication- and authorization-related risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed with the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating all users as administrators will affect the operation of the containers managed by the orchestrator.

Orchestrator users should be given only the required access, with proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In containers, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters only see the encrypted packets traveling between the hosts, leading to security blindness and ineffective traffic monitoring.

To overcome this risk, orchestrators need to separate network traffic into virtual networks according to sensitivity levels.

  3. Orchestrator node trust

You need to pay special attention to maintaining trust between the hosts, especially the orchestrator node. Weaknesses in the orchestrator configuration increase risk. For example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, orchestrators should be configured securely for nodes and apps. If any node is compromised, it should be isolated and removed without disturbing the other nodes.

Container Risks

  1. App vulnerabilities

It is always good to have defense in depth. Even after following the recommendations above, containers may still be compromised if the apps they run are vulnerable.

As we have seen, traditional security tools may not be effective when used on containers. You need a container-aware tool that detects behavior and anomalies in the app at runtime, to find and mitigate issues.

  2. Rogue containers

It is possible to have rogue containers. Developers may launch them to test their code and leave them there. This may lead to exploits, as those containers might not have been thoroughly checked for security loopholes.

You can overcome this by maintaining separate environments for development, test, and production, with role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has its attack surface, and the larger the attack surface, the easier it is for an attacker to find and exploit a vulnerability and compromise the host operating system and the containers that run on it.

If you cannot use a container-specific operating system, you can follow the NIST SP 800-123 guide to server security to minimize the attack surface.

  2. Shared kernel

If you run only containers on a host OS, you will have a smaller attack surface than on a normal host machine, where you need libraries and packages to run a web server, a database, and other software.

You should not mix container and non-container workloads on the same host machine.

If you wish to further explore this topic, I suggest you read NIST SP 800-190.



About the Author –

Anandharaj is a lead DevSecOps engineer at GAVS and has over 13 years of experience in cybersecurity across different verticals, including network security, application security, computer forensics, and cloud security.

The DNA of a Good Leader (PART I)

Rajeswari S

In our lives, we come across some people with great leadership qualities. They may not be leading a team or an organization, but they exude an aura. They conduct themselves in a manner that sets them apart from the rest. While the debate rages on over whether leaders are born, made, discovered, innovated, or invented, let’s see what makes a person a true and admirable leader.

Generally, a good leader should be successful, progressive, and positive, must possess good personality traits, communication and delegation skills, charisma, agility, adaptability, and ability to transform the air around them by effecting positive changes.

Some people are able to bring out the best in others, and that is the edge they have. So, let’s look beyond and list out those qualities that make a person, or YOU, a quintessential leader.

  1. Be passionate: Obviously, you would think it is the dedication and commitment to one’s work to up the number of clients, revenue figures, etc. However, it is not just about that. Your passion affects not only your own attitude and energy but also that of those around you. Your passion should spread like wildfire and inspire action and positive change in others.

  2. Face obstacles with grace: If any leader knew exactly what a customer or market truly wants from the business, they would be hailed as no less than a God! But alas, life is always full of obstacles, and a true leader knows which battles to fight and how. Effective leaders approach roadblocks with a high level of positivity and maturity, and adopt creative problem-solving techniques that allow them to overcome situations that others might give up on.
  3. Allow honest mistakes, spot talent: An over-protected child learns nothing and cannot sail against the tides. A good leader allows their people to just GO FOR IT! Failure often provides some of life’s biggest learning opportunities, as uncertainty and risk are inherent to running a team or business. Some people do commendable jobs under high-pressure situations. A good leader spots such resources in their team and makes the best use of their qualities.
  4. Be street smart: It’s hard to find a substitute for old-fashioned street smarts. Knowing how to trust your gut, quickly analyzing situations as well as the people you’re dealing with, and knowing how to spot a bad deal or scammer are important aspects of leadership. Maturity and experience complement each other, and a perfect combination of the two makes a great leader.
  5. Be intuitive and take ownership: Intuition is to art as logic is to math. Leadership is often about following your gut instinct. It can be difficult to let go of logic in some situations, but learn to trust yourself. Having said that, if your instinct fails, leadership is also about taking ownership of what happened, learning lessons from it, and NEVER REPEATING THE SAME MISTAKE.
  6. Understand opportunity cost: Leaders know that many situations and decisions in business involve risk, and there is an opportunity cost associated with every decision you make. An opportunity cost is the cost of a missed opportunity. It is usually defined in terms of money, but it may also be considered in terms of time, man-hours, or any other finite resource. Great leaders understand the consequences of their decisions before making them.
  7. Be liked: You can respect a person who talks flamboyantly and has a brilliant mind, impeccable manners, and business skills, but do you LIKE them? A leader should not only be respected but also liked. Liking a person is not a quantifiable quality, is it? But it can be achieved in the way a leader captains the team, spreads a positive feeling among them, and makes the group feel that they belong there.
  8. Laugh: Yes… you read it right. One proven route to a person’s mind or heart is a healthy sense of humor. It works well in getting the best out of your team. Nobody likes a templated talk or expression, even if it is good news you are trying to convey. Also, effective leaders can laugh at themselves, as they understand that they are also human and can make mistakes like everyone else. Leaders who take themselves too seriously risk alienating people.

Unique brands of Leadership

A quick look at some successful CEOs, new-age entrepreneurs, and their unique leadership mantras:

  1. Satya Nadella, CEO, Microsoft

Leadership mantra: 

  • An avid reader
  • Looks beyond the Horizon
  • Makes the right move at the right time
  • Makes every second count
  • Nurtures a strong company culture
  2. Nitin Saluja and Raghav Verma, Founders, Chaayos, the fastest growing tea startup in India

Leadership mantra: Give people wings to fly and they will carve out their own journey.

  3. Mukesh Ambani, Chairman & Managing Director, Reliance Industries Ltd

Leadership mantra:

  • Money is not everything but important
  • Have a dream and plan to fulfill it
  • Let your work speak for itself  
  • Trust your instincts
  • Trust all, but depend on none

References:

  • https://briandownard.com,
  • https://economictimes.indiatimes.com

About the Author –

Rajeswari works in IP, in Content Development, with 13 years of technical, content, and creative writing experience. Off work, she is passionate about singing, music, and creative writing, loves highway drives, and is a movie buff.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, especially in the mainstream media and also among our clients, is that to conduct a successful attack against an IT system it is necessary to ‘investigate’ and find a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risk is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST have found that correcting security bugs early on is orders of magnitude cheaper than doing so after development has been completed.

When composing a text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, there are security tools known as ASTs (Application Security Testing) that find programming errors that introduce security weaknesses. ASTs report the file and line where a vulnerability is located, in the same way that a text editor reports the page and line that contain a typo.

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just as it is almost impossible to catch all errors in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools, also known as ASTs or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

The SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming that could lead to a vulnerability.
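
To make the pattern-matching idea concrete, here is a minimal, illustrative example (Python with SQLite; the table and input are invented for this sketch) of the kind of insecure code a SAST tool would flag, a SQL query built by string concatenation, next to its parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure pattern a SAST tool flags: query built by string concatenation.
# The injected OR clause makes the WHERE condition always true.
insecure = "SELECT role FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(insecure).fetchall()      # returns data it should not

# Safe pattern: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()                                    # returns no rows
```

A static analyzer never runs this code; it simply recognizes the concatenated-query shape as a known insecure pattern.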

The DAST dynamic analysis replicates the view of an attacker. At this point, the tool executes hundreds or thousands of queries against the application designed to replicate the activity of an attacker to find security vulnerabilities. This is a black-box analysis because the point of view is purely external, with no knowledge of the application’s internal architecture.

The level of detail provided by the two types of tools is different. SAST tools provide file and line where the vulnerability is located, but no URL, while DAST tools provide the external URL, but no details on the location of the problem within the code base of the application. Some teams use both tools to improve visibility, but this requires long and complex triaging to manage the vulnerabilities.

The Interactive AST Approach

The Interactive Application Security Testing (IAST) tools combine the static approach and the dynamic approach. They have access to the internal structure of the application, and to the way it behaves with actual traffic. This privileged point of view is ideal to conduct security analysis.

From an architecture point of view, the IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms such as Application Performance Monitoring tools (APMs) share this proven approach.

Once the agent has been installed, it incorporates automatic security sensors in the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage is much better than the visibility that SAST and DAST rely on.

In terms of specific results, we can look at two important metrics: how many types of vulnerabilities the tool finds, and how many of the identified vulnerabilities are false positives. The best DAST is able to find only 18% of the existing vulnerabilities in a test application. Even worse, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not true problems!


Source: Hdiv Security via OWASP Benchmark public result data

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments; development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic points of view allows us to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of the secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces the manual verification activities.

An IAST tool can add a lot of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.

Quantum Computing

Vignesh Ramamurthy


In the MARVEL multiverse, Ant-Man has one of the coolest superpowers out there. He can shrink himself down as well as blow himself up to any size he desires! He was able to reduce to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, there is Quantum computing. Quantum computers are far more powerful than supercomputers for certain problems, and tech companies like Google, IBM, and Rigetti have built them.

Google achieved Quantum Supremacy with its quantum computer ‘Sycamore’ in 2019. It claims Sycamore performed in 200 seconds a calculation that would take the world’s most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, with the temperature close to absolute zero.


Quantum Physics

Quantum computing falls under a discipline called Quantum Physics. Quantum computing’s heart and soul resides in what we call Qubits (Quantum bits) and Superposition. So, what are they?

Let’s take a simple example: imagine you have a coin and you spin it. One cannot know the outcome until it falls flat on a surface. It can either be heads or tails. However, while the coin is spinning, you can say the coin’s state is both heads and tails at the same time (like a qubit). This state is called Superposition.

So, how do they work and what does it mean?

We know classical bits take one of two states, 0 or 1 (negative or positive). Qubits can hold both at the same time. In the end, these qubits pass through something called the “Grover Operator”, which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, just like how Doctor Strange did in the movie Infinity War. However, what is important is to understand how this technically works.

We shall see two explanations which I feel give an accurate picture of the technical aspects.

In Quantum Mechanics, the following is as explained by Scott Aaronson, a quantum scientist from the University of Texas at Austin.

Amplitude – each qubit has an amplitude for being 0 and an amplitude for being 1, and amplitudes can be positive or negative. The goal is to arrange the computation so that amplitudes leading to wrong answers cancel each other out. This way, the amplitude of the right answer remains the only likely outcome.

Quantum computers function using a process called superconductivity. We have a chip the size of an ordinary computer chip. There are little coils of wire in the chip, nearly big enough to see with the naked eye. There are 2 different quantum states of current flowing through these coils, corresponding to 0 and 1, or the superpositions of them.

These coils interact with each other; nearby ones talk to each other and generate what is called an entangled state, an essential state in Quantum computing. The way qubits interact is completely programmable, so we can send electrical signals to these qubits and tweak them according to our requirements. This whole chip is placed in a refrigerator at a temperature close to absolute zero. This way superconductivity occurs, which makes the coils briefly behave as qubits.

The following is the explanation given by ‘Kurzgesagt — In a Nutshell’, a YouTube channel.

We know a bit is either 0 or 1, so 4 bits can be 0000 and so on. 4 classical bits can be in one of the 2^4 = 16 different configurations at a time, and only one of them can be used. 4 qubits in superposition, however, can be in all of those 16 combinations at once.

This grows exponentially with each extra qubit. 20 qubits can hence store over a million values in parallel. As mentioned, entangled qubits interact with each other instantly; hence, while measuring one entangled qubit, we can directly deduce the properties of its partners.
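
This exponential growth is easy to see in a classical simulation of the state vector. The sketch below (plain Python, no quantum library; purely illustrative) holds the 2^4 = 16 amplitudes of a 4-qubit register in uniform superposition:

```python
import math

n_qubits = 4
n_states = 2 ** n_qubits  # 16 basis states: 0000 ... 1111

# A uniform superposition assigns the same amplitude to every basis state.
amplitude = 1 / math.sqrt(n_states)
state = [amplitude] * n_states

# Measurement probabilities are the squared amplitudes and must sum to 1.
probabilities = [a * a for a in state]
```

Each extra qubit doubles the length of `state`, which is why 20 qubits already correspond to over a million amplitudes.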

A normal logic gate gets a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them, and manipulates probabilities. Now it finally measures the outcome, collapsing superpositions to an actual sequence of 0s and 1s. This is how we get the entire set of calculations performed at the same time.

What is a Grover Operator?

We now know that by measuring one entangled qubit, we can easily deduce the properties of all its partners. Grover’s algorithm works because these quantum particles are entangled. Since one entangled qubit can vouch for its partners, the algorithm iterates until it finds the solution with a high degree of confidence.
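
A toy classical simulation can show the effect of the Grover operator on the amplitudes (an illustrative sketch, not how a real quantum device is programmed): each iteration flips the sign of the marked state’s amplitude and then inverts all amplitudes about their mean, boosting the marked state:

```python
import math

def grover_iteration(amps, marked):
    # Oracle: flip the sign of the amplitude of the marked state.
    amps = [-a if i == marked else a for i, a in enumerate(amps)]
    # Diffusion (inversion about the mean): the "Grover operator" step.
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

n = 16                            # search space for 4 qubits
amps = [1 / math.sqrt(n)] * n     # start in uniform superposition
marked = 5                        # index of the "right answer"

# About (pi/4) * sqrt(n) iterations are optimal; for n = 16 that is 3.
for _ in range(round(math.pi / 4 * math.sqrt(n))):
    amps = grover_iteration(amps, marked)

probability = amps[marked] ** 2   # ~0.96 after 3 iterations
```

On a quantum computer all 16 amplitudes evolve in one physical register; the classical loop here only mimics the arithmetic.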

What can they do?

As of now, quantum computing hasn’t been implemented in real-life situations, simply because the world doesn’t yet have the required infrastructure.

Assuming they are efficient and ready to be used, we can make use of them in the following ways:

1) Self-driving cars are picking up pace. Quantum computers can help these cars by calculating all possible outcomes on the road. Apart from sensors to reduce accidents, roads consist of traffic signals. A quantum computer would be able to go through all the possibilities of how traffic signals function, the time intervals, the traffic, everything, and feed these self-driving cars with the single best outcome accordingly. What would result is nothing but a seamless commute with no hassles whatsoever. It’ll be the future as we see in movies.

2) If AI is able to construct a circuit board after having tried everything in the design architecture, this could result in promising AI-related applications.

Disadvantages

RSA encryption underpins the entire internet. A quantum computer could breach it, and hackers might steal confidential information related to health, defence, personal information, and other sensitive data. At the same time, quantum computing could help achieve the most secure encryption, by identifying the best among every possible encryption scheme and finding the most secure wall against the viruses that could infect the internet. If such security were built, it would take a completely new kind of attack to break it. But the chances are minuscule.

Quantum computing has its share of benefits. However, it will take years to be put to use. The infrastructure and investment required are humongous. After all, it can only be used when there are very reliable real-time use cases, and it needs to be tested for many things. There is no doubt that Quantum Computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be prepared for it.


About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Business Intelligence Platform RESTful Web Service

Albert Alan

Restful API

RESTful Web Services are web services based on the REST architecture. Representational State Transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web. In this architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs), typically links on the Web.


REST has some advantages over SOAP (Simple Object Access Protocol) but is similar in technology, since it is also a function call via the HTTP protocol. REST is easier to call from various platforms, transfers pure human-readable data in JSON or XML, and is faster and saves resources.

In the basic idea of REST, an object is accessed via REST, not its methods. The state of the object can be changed by REST access, the change being driven by the passed parameters. A frequent application is the connection to SAP PI via the REST interface.

When to Use REST Services

  • You want to access BI platform repository objects or perform basic scheduling.
  • You want to use a programming language that is not supported by another BI platform SDK.
  • You want to extract all the query details and the number of records per query for all reports, like Webi and Crystal reports.
  • You want to extract the folder paths of all reports at once.

Process Flow


RESTful Web Service Requests

To make a RESTful web service request, you need the following:

  • URL – The URL that hosts the RESTful web service.
  • Method – The type of HTTP method to use for sending the request, for example GET, PUT, POST, or DELETE.
  • Request header – The attributes that describe the request.
  • Request body – Additional information that is used to process the request.
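
Putting these four pieces together, the sketch below (Python standard library only; the endpoint and body fields follow a common BI platform logon pattern but should be checked against your server’s documentation) constructs, without sending, a POST request to the logon URI:

```python
import json
import urllib.request

# URL that hosts the RESTful web service (6405 is a commonly used biprws port).
url = "http://localhost:6405/biprws/v1/logon/long"

# Request body: additional information used to process the request.
body = json.dumps({
    "userName": "Administrator",   # assumed credentials, for illustration only
    "password": "secret",
    "auth": "secEnterprise",
}).encode("utf-8")

# Method plus request headers that describe the request.
request = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)

# urllib.request.urlopen(request) would send it once the server is reachable.
```

The same `Request` shape works for the other URIs listed below, swapping the method (GET, PUT, DELETE) and omitting the body where not needed.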

Common RWS Error Messages


RESTful Web Service URIs Summary List

  • /v1
    Response: Service document that contains a link to the /infostore API.
    Comments: This is the root level of an infostore resource.
  • /v1/infostore
    Response: Feed containing all the objects in the BOE system.
    Example: /v1/infostore
  • /v1/infostore/<object_id>
    Response: Entry corresponding to the info object with the given SI_ID.
    Example: /v1/infostore/99
  • /v1/logon/long
    Response: Returns the long form for logon, which contains the user and password authentication template.
    Comments: Used to log on to the BI system based on the authentication method.
  • /v1/users/<user_id>
    Response: XML feed of user details in the BOE system.
    Comments: You can modify a user using the PUT method and delete a user using the DELETE method.
  • /v1/usergroups/<usergroup_id>
    Response: XML feed of user group details in the BOE system.
    Comments: Supports the GET, PUT, and DELETE methods. You can modify a user group using the PUT method and delete a user group using the DELETE method.
  • /v1/folders/<folder_id>
    Response: XML feed displaying the details of the folder; can be used to modify and delete the folder.
    Comments: You can modify the folder using the PUT method and delete the folder using the DELETE method.
  • /v1/publications
    Response: XML feed of all publications created in the BOE system.
    Comments: This API supports the GET method only.

Extended Workflow

 The workflow is as follows:

  • To Pass the Base URL

GET http://localhost:6405/biprws/v1/users

  • To Pass the Headers

  • To Get the xml/json response

Automation of Rest Call

The Business Intelligence platform RESTful Web Service SDK (BI-REST-SDK) allows you to programmatically access BI platform functionalities such as administration, security configuration, and modification of the repository. In addition to the BI-REST-SDK, you can also use the SAP Crystal Reports RESTful Web Services SDK (CR REST SDK) and the SAP Web Intelligence RESTful Web Services SDK (WEBI REST SDK).

Implementation

An application has been designed and implemented in Java to automate the extraction of the SQL queries of all the Webi reports on the server at once.
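As a hedged illustration of the extraction step, the sketch below pulls SQL statements out of a data-provider response body with a simple pattern match. The JSON field name "statement" and the response shape are assumptions made for illustration; a real implementation would use the json-simple parser bundled with the application and the actual WEBI REST SDK response format.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical extraction helper: scans a data-provider response body for
// SQL statements. The "statement" field name is an assumption for
// illustration, not the confirmed WEBI REST SDK schema.
public class SqlStatementExtractor {

    private static final Pattern STATEMENT =
            Pattern.compile("\"statement\"\\s*:\\s*\"((?:[^\"\\\\]|\\\\.)*)\"");

    static List<String> extractSqlStatements(String responseBody) {
        List<String> result = new ArrayList<>();
        Matcher m = STATEMENT.matcher(responseBody);
        while (m.find()) {
            // Unescape the two escape sequences that matter for SQL text.
            result.add(m.group(1).replace("\\\"", "\"").replace("\\n", "\n"));
        }
        return result;
    }
}
```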

Tools used:

  • Postman (Third party application)
  • Eclipse IDE

The structure of the application is as below:

The application comprises the required Java jar files, Java class files, the Java properties file, and logs. The Java class file (SqlExtract) contains the source code and is compiled and executed from the command prompt as follows:

Step 1

  • javac -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract.java

 The above command compiles the Java code.

Step 2

  • java -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract

 The above command runs the compiled Java class.

The Java properties file (log4j) is used to set the configuration for the Java code to run. The path for the log file can also be set in the properties file.
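A minimal log4j.properties along these lines would direct the extractor's output to a file. The appender name, log path, and pattern below are illustrative assumptions, not the application's actual configuration.

```properties
# Illustrative log4j 1.2 configuration (file name and path are assumptions)
log4j.rootLogger=INFO, file

# Write extractor output to a rolling log file; adjust the path as needed
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=C:/SqlExtract/logs/SqlExtractLogger.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```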


The log file (SqlExtractLogger) contains the required output: the extracted query for each Webi report, along with the data source name, type, and row count for each query, written to the folder set by the user in the properties file.


The application is standalone and can run on any Windows platform or server that has a Java JRE installed (version above 1.6 preferred).

Note: All the above steps required to execute the application are consolidated in the (steps) file.

Conclusion

SAP BO provides RESTful web services to traverse its repository, fetch structural information, and modify the metadata structure based on user requirements. When integrated with programming languages such as Python or Java, the scope extends even further, allowing the user to automate workflows and solve backtracking problems.

Handling RESTful web services needs expertise in server administration and programming, as changes made to the metadata are irreversible.


About the Author –

Alan is an SAP Business Intelligence consultant with critical thinking and an analytical mind. He believes in 'The more extensive a man's knowledge of what has been done, the greater will be his power of knowing what to do'.