Can AI transform employee experience?

Innovation and emergence of AI

Continuous innovation and technological disruption have taken the market by storm, forcing the digital workforce to undergo massive transformation. Both organizations and their employees have had to rework their modes of operation to achieve this digital revolution. Research by Deloitte reports that the change brought about by the advent of artificial intelligence (AI) is diverse, encompassing organizations, productivity, the workforce and HR. AI's impact on employee experience is immense and can benefit any organization.

AI encompasses all industry verticals

AI has touched almost every industry vertical, from healthcare to advertising, finance, legal, transportation and education. Beyond personalized consumer experiences, AI is also being adopted worldwide in the professional arena. AI-powered chatbots in the workplace can aid HR solutions, creating a seamless employee experience. Secondary research confirms that 92% of HR leaders believe the future of enhanced employee service will include chatbots, and research suggests that by 2020, 75% of employees will depend on conversational AI technology. AI-driven chatbots offer a degree of comfort to the individuals who use them, which makes them popular.

AI intervenes in HR related issues

AI plays a crucial role in enabling a company's HR function to support its workforce with 24/7 services. It ensures competent guidance to employees and allows their creativity and innovation to grow along with the company. A classic example of AI in HR automation is a conversational solution such as Amelia, which has proven its worth by independently resolving employee issues related to payroll, facilities, career development and more. In situations involving sexual misconduct, an employee may feel much more comfortable reporting the incident through a chatbot than to a human being. Even policies regarding maternity leave can be accessed more readily through AI-powered chatbots.

With extensive knowledge of the organization, a cognitive assistant becomes a one-stop hub for all HR needs. Certain frameworks, complex processes and policies are part of the HR function and, although tedious, are still crucial; automating the communication of such information makes it seamless and hassle-free. Technologies like AI and chatbots have revolutionized the employee life cycle, from recruiting and onboarding to creating growth paths for employees and upskilling.

Large companies are proactively using AI to pre-screen candidates before inviting them for face-to-face interviews. Pymetrics and Montage are tools that assess candidates on cognitive and emotional traits while avoiding biases concerning gender, race and socio-economic status. When trained on appropriate data, AI-powered chatbots prove useful for diversity hiring. A combination of technology, analytics and intuitive tools increases the proficiency of the HR function, making it more reliable and effective. This not only improves effectiveness and efficiency but also positions the organization as a modern employer for Millennial workers.

Impact of training powered by AI

Large businesses that strive to hire and retain top talent realize the importance of a positive employee experience in driving exponential performance improvements. Companies are recognizing that employee experience is a competitive advantage in attracting and retaining talent, and that transforming it is not an HR initiative but a business initiative. Be it on-the-job training or transfer of skills, AI plays a crucial role in easing the training process. AI-driven tools help the workforce with both upskilling and reskilling initiatives. The intent of such integration is augmentation that enhances workforce efficiency rather than replaces it. Integrating AI into the learning and development process can improve employees' overall learning outcomes. The underlying idea is to automate mundane tasks, freeing individuals to enjoy their creative space and thereby renewing the employee experience.

Tracking health and wellbeing of workforce

Since technology now permeates an individual's personal life, employees expect the same level of initiative from their company in improving work life. AI can help organizations track the sleeping and exercise habits of the workforce. While this can feel intrusive, it helps both the company and its workforce maintain a work-life balance, stay productive and improve over time.

Observing the transformation of employee experience

According to research conducted by Deloitte, 80% of respondents are aware of the importance of employee experience, and 92% believe that AI will transform it, for example through AI-driven chatbots that retrieve information. IT helpdesk chatbots are another contribution of AI, assisting employees with basic troubleshooting and ticket updates.

With the increase in the Millennial workforce across industries, chatbots are rapidly gaining popularity. Young employees prefer chatbots over humans, who need time to process a request and search for the correct information. In fact, secondary research confirms that 30% of today's workforce prefer a search engine empowered with the information required to complete their daily job requirements. Personalized compensation packages based on an employee's job role, efficiency and productivity create a trusted reward program that incentivizes employees to work harder.

Creating a new culture

A company with Millennial or Gen Z digital natives requires up-to-date, intuitive technology for online training and development, along with a change in orientation that shifts the focus from tasks to outcomes for a more dynamic work experience. There is a growing need to look beyond an employee's function and consider their total experience with the organization. Hiring, onboarding, cultural orientation, management and team dynamics all play a pertinent role in shaping an employee's experience. Understanding this helps create a stronger employee culture, one able to meet employees' expectations of a productive, engaging and enjoyable work experience.

Hindrance towards blending AI with the existing order

There are certain barriers that an organization faces while empowering its workforce with AI technology.

  • HR team members are often skeptical about adopting this transformation due to a lack of required skills
  • Fear of losing jobs due to automation
  • Lack of the change management required to adopt new ways of sourcing and engaging employees
  • Many companies have not yet prioritized employee experience
  • Absence of cultural orientation toward the revolution AI has initiated
  • Lack of communication between employees and the organization regarding the digital transformation

Advantages confirming a transformational employee experience

  • It has become a way of understanding behavior change in both employees and customers
  • A transformed employee experience can help gain a competitive advantage in attracting and retaining employees
  • HR, IT and digital transformation teams work in concert to design a technological roadmap
  • Substantial technological advances, such as the creation of conversational interfaces
  • Transformation of the employee-employer relationship through automation
  • Emergence of new HR job roles
  • Faster resolution and improved incident management create a positive impact on employees
  • An AI-powered Business Intelligence (BI) bot can empower employees to leverage data and gather insights on a regular basis
  • An AI-driven video communication strategy can improve video insight and discovery


Organizations are readily embracing change along with technological evolution. Among focus areas such as branding, strategic hiring and data analysis, organizations strongly believe that employee experience is the future of a successful company. With a massive focus on digitization and the mammoth impact of technology adoption, both employees and organizations need to learn and be prepared to acclimatize to the change innovation is bringing. The motto with which companies proactively instill AI technology is to focus on strategic value creation in the form of employee engagement, performance improvement and employee experience.

Massive Parallel Processing (MPP)

by Dharmeswaran P

Big data is a term that describes the large volume of data that inundates businesses on a day-to-day basis. Algorithms that work well on “small” datasets crumble when the size of the data extends into terabytes. Organizations large and small are forced to grapple with problems of big data, which challenge the existing tenets of data science and computing technologies. The importance of big data doesn’t revolve around how much data you have, but what you do with it.

In the early 2000s, big data storage problems were solved by companies like Teradata, which offer a unified architecture able to store petabytes of data. Teradata can seamlessly distribute datasets across multiple Access Module Processors (AMPs) and facilitate faster analytics.

Teradata Database is a highly scalable RDBMS produced by Teradata Corporation (TDC). It is widely used to manage large data warehousing operations with Massive Parallel Processing (MPP). It acts as a single data store that accepts many concurrent requests and complex Online Analytical Processing (OLAP) from multiple client applications.

Teradata has patented software called Parallel Database Extension (PDE), which is installed on the hardware. PDE divides a system's processor into multiple virtual software processors, each of which acts as an individual processor and can perform all tasks independently. Similarly, Teradata's disk hardware is divided into multiple virtual disks, one per virtual processor. Hence, Teradata is called a shared-nothing architecture.

Teradata uses parallel processing, and its most important aspect is spreading the rows of a table evenly across the AMPs, which read and write the data. It uses a hashing algorithm to determine which AMP is responsible for a data row’s storage and retrieval: the algorithm generates the same 32-bit hash value whenever the same data value is passed into it.
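To make the row-to-AMP mapping concrete, here is a minimal Java sketch of the idea. This is not Teradata's proprietary hashing algorithm; CRC32 stands in as an arbitrary deterministic 32-bit hash, and the names `AmpRouter` and `ampFor` are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class AmpRouter {

    // Deterministically map a primary-index value to one of `numAmps` AMPs.
    // The same input always yields the same 32-bit hash, hence the same AMP,
    // so any AMP can later locate the row without a central directory.
    public static int ampFor(String primaryIndexValue, int numAmps) {
        CRC32 crc = new CRC32();
        crc.update(primaryIndexValue.getBytes(StandardCharsets.UTF_8));
        long hash32 = crc.getValue();       // 32-bit hash value
        return (int) (hash32 % numAmps);    // hash bucket -> AMP number
    }

    public static void main(String[] args) {
        // The same value always routes to the same AMP ...
        System.out.println(AmpRouter.ampFor("customer-42", 8));
        // ... while different values spread across the AMPs.
        System.out.println(AmpRouter.ampFor("customer-43", 8));
    }
}
```

In the real system the hash also drives even data distribution, which is why choosing a primary index with many distinct values matters for parallelism.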

Tools and Utilities

Teradata Studio – Client based graphical interface for performing Database administration and Query development.

Teradata Parallel Transporter (TPT) – Parallel and scalable utility for loading data to, and unloading data from, external sources.

Viewpoint – Provides Teradata customers with a Single Operational View (SOV) – System management and monitoring across the enterprise for both administrators and business users.

Row-level security (RLS) – Allows restricting data access on a row-by-row basis in accordance with site security policies.

Workload Management– A workload is a class of database requests with common traits whose access to the database can be managed with a set of rules. Workload management is the act of managing Teradata Database workload performance by monitoring system activity and acting when pre-defined limits are reached.

Hadoop and Cloud connector

Teradata Connector for Hadoop (TDCH) – Bi-directional data movement utility between Hadoop and Teradata which runs as a MapReduce application inside the Hadoop cluster.

QueryGrid – Teradata-Hadoop connector provides a SQL interface for transferring data between Teradata Database and remote Hadoop hosts.

IntelliCloud – Secure cloud offering that provides data and analytic software as a service (SaaS). It enables an enterprise to focus on data warehousing and analytic workloads and rely on Teradata for the setup, management, maintenance, and support of the software and infrastructure – either in Teradata data centers or using public cloud from Amazon Web Services (AWS) and Microsoft Azure.

Use case in Automobile Industry

A well-known automobile company used Teradata in its product development process. As part of this initiative, employees volunteered to have their vehicle data collected via the OpenXC interface and stored on Teradata for near-real-time analysis.

  • A Controller Area Network (CAN) interface is installed in each participating vehicle
  • Vehicle data is streamed to the phone via Bluetooth
  • Data is collected on the phone using a mobile app
  • Every day, the app transmits a huge amount (~1TB) of data to the Teradata server through the participant's home broadband Internet
  • The size of the database is ~200TB. The data is cleaned and standardized for easy, in-depth analysis. Once the data is transformed, the target database size is ~500GB per day for the reporting application to visualize
  • The data is then available for engineers to transform and analyze across different dimensions for reporting
  • Using this analytics engine, they have analyzed fuel economy, on-road conditions, performance of car equipment, failures, battery charge limits, etc.

Summary

The fundamental principle behind Teradata is its ability to ingest petabytes of data and process them at high speed. It provides the most scalable, flexible, cloud-capable Enterprise Data Warehouse in today’s market: the world’s first parallel database designed to analyze data rather than simply store it.

Let’s talk Singleton

by Bargunan Somasundaram

The Singleton pattern is one of the GoF (Gang of Four) design patterns. It is one of the most basic and best-known creational design patterns, yet it is often misunderstood in implementation. Let’s delve deeper into what the Singleton pattern is and how to create an effective singleton in Java. Since multiple implementations of the Singleton pattern exist, let’s also see the merits of choosing one Singleton design over another.

What is Java Singleton Design Pattern? 

  • The Singleton Design Pattern ensures that the instance of a particular class is created only once throughout the Java Virtual Machine by providing a check on initialization.
  • It also provides a unique global access point to the object, so that each subsequent call to the access point returns that same object, i.e. no new instances are created for subsequent calls.
  • A Singleton also guarantees control of a resource.

Why do we need Singleton? Or when to use Singleton? 

From a design perspective, some scenarios require only one instance of a class to be created.

For example,  

  • Facade objects are often Singletons because only one Facade object is required.
  • Creating multiple loggers in an application is a costly operation, so making loggers Singletons reduces the performance overhead.
  • One-time configurations can be encapsulated in Singletons, e.g. driver objects or database connection strings.
  • Caching plays an important role in reducing server calls by returning the same result, so designing caching resources as Singletons makes the same resource available for all future calls.
  • Other design patterns like Prototype, Abstract Factory, Façade and Builder also employ Singletons.

How to implement Java Singleton? 

Steps to create a Singleton Class:  

There are multiple approaches to implementing a Singleton, each with its own advantages. But first, let’s see the steps common to all of them when creating a Singleton class.

  1. Make the constructor of the class private, so that instantiation outside the class is restricted.
  2. Create a private static variable of the same class type, which will hold the only instance of the class.
  3. Write a public static method whose return type is the same class, which instantiates the class only once and returns that instance. This method is the global access point through which the outside world obtains the Singleton instance.

Code Snippet Of a minimal Singleton Design 

public class FileLogger {

       private static FileLogger logger;

       // prevents instantiation from outside the class
       private FileLogger() {
       }

       // Global point of access
       public static FileLogger getFileLogger() {
              if (logger == null) {
                     logger = new FileLogger();
              }
              return logger;
       }
}


Types of Singleton in Java. 

There are multiple approaches in OOP to designing a Singleton class. Based on the need, the following are the types of Singleton.

  1. Eagerly initialized Singleton 
  2. Lazily initialized Singleton 
  3. Static block initialized Singleton 
  4. Bill Pugh or On-Demand Holder Singleton 
  5. Thread safe Singleton 
  6. Serialization safe Singleton 
  7. Reflection safe Singleton 
  8. Clone safe Singleton 
  9. Enum Singleton 
  10. Weak Malleable Singleton 
  11. Soft Malleable Singleton 

Eagerly initialized Singleton 

In an eagerly initialized singleton, the instance of the class is created at class-loading time. This is the simplest approach but has several disadvantages.

Pros:

  • Thread safe.
  • Simple to create.

Cons:

  • The instance is created even before the client could use it.
  • Prone to Reflection, i.e. multiple instances can be created using Reflection.
  • Prone to Cloning, i.e. cloning returns new instances.

Code Snippet.

import java.util.Optional;

public class EagerlyInitializedSingleton {

       private static Optional<EagerlyInitializedSingleton> INSTANCE =
                     Optional.ofNullable(new EagerlyInitializedSingleton());

       private EagerlyInitializedSingleton() {
       }

       public static Optional<EagerlyInitializedSingleton> getInstance() {
              return INSTANCE;
       }
}


Lazily initialized Singleton 

As the name suggests, the instance of the class is created only when required, i.e. when a call is made to the getInstance() method.

Pros:

  • Lazy initialization with good performance.

Cons:

  • Not thread safe. In a multi-threaded environment, the singleton guarantee can be broken.
  • Prone to Reflection, i.e. multiple instances can be created using Reflection.
  • Prone to Cloning, i.e. cloning returns new instances.

Code Snippet.

import java.util.Optional;

public class LazilyInitializedSingleton {

       private static Optional<LazilyInitializedSingleton> INSTANCE = Optional.empty();

       private LazilyInitializedSingleton() {
       }

       public static Optional<LazilyInitializedSingleton> getInstance() {
              if (!INSTANCE.isPresent()) {
                     INSTANCE = Optional.ofNullable(new LazilyInitializedSingleton());
              }
              return INSTANCE;
       }
}


 Static block initialized Singleton 

Here the instance is initialized in a static block.

Pros:

  • A runtime exception can be thrown in case of issues in creating the Singleton instance.
  • Thread safe.

Cons:

  • Similar to the eagerly initialized approach, the instance is created even before the client could use it.
  • Prone to Reflection, i.e. multiple instances can be created using Reflection.
  • Prone to Cloning, i.e. cloning returns new instances.

Code Snippet.

public class StaticBlockSingleton {

       private static StaticBlockSingleton INSTANCE;

       private StaticBlockSingleton() {
       }

       static {
              try {
                     INSTANCE = new StaticBlockSingleton();
              } catch (Exception e) {
                     throw new RuntimeException("Exception in creating singleton instance");
              }
       }

       public static StaticBlockSingleton getInstance() {
              return INSTANCE;
       }
}


Bill Pugh or On-Demand Holder Singleton 

Bill Pugh’s “On-Demand Holder” approach to creating a Singleton class is widely used for the ease with which it solves Java memory model issues and for its lazy loading.

This approach uses a static inner class, which acts as an on-demand holder: the holder class is loaded, and the instance created, only on the first call to getInstance().

Pros:

  • Lazy initialization with good performance.
  • Easy to understand and implement.
  • Thread safe.

Cons:

  • Prone to Reflection, i.e. multiple instances can be created using Reflection.
  • Prone to Cloning, i.e. cloning returns new instances.

Code Snippet.

public class BillPughSingleton {

       private BillPughSingleton() {
       }

       // loaded only on the first call to getInstance()
       private static class SingletonHolder {
              private static final BillPughSingleton INSTANCE = new BillPughSingleton();
       }

       public static BillPughSingleton getInstance() {
              return SingletonHolder.INSTANCE;
       }
}


Thread safe Singleton 

Since the JVM is a multi-threaded platform, it is good practice to design the singleton for concurrency safety.

To make the lazily initialized singleton thread safe, the easiest (or perhaps naïve) way is to make the getInstance() method synchronized.

Code Snippet.

import java.util.Optional;

public class SingleThreadSingleton {

       private static Optional<SingleThreadSingleton> INSTANCE = Optional.empty();

       private SingleThreadSingleton() {
       }

       public static synchronized Optional<SingleThreadSingleton> getInstance() {
              if (!INSTANCE.isPresent()) {
                     INSTANCE = Optional.ofNullable(new SingleThreadSingleton());
              }
              return INSTANCE;
       }
}



Pros:

  • Lazy initialization.
  • Thread safe.

Cons:

  • Making the getInstance() method synchronized slows down performance, since threads access the Singleton instance sequentially. In a multi-threaded environment, this has a huge impact on performance.
  • Prone to Reflection, i.e. multiple instances can be created using Reflection.

To reduce this performance penalty on every call, there is another approach called “double-checked locking”.

  Code Snippet.

import java.util.Optional;

public class MultiThreadedSingleton {

       // volatile ensures the fully constructed instance is visible to all threads
       private static volatile Optional<MultiThreadedSingleton> INSTANCE = Optional.empty();

       private MultiThreadedSingleton() {
       }

       public static Optional<MultiThreadedSingleton> getInstance() {
              if (!INSTANCE.isPresent()) {
                     // lock on the class, not on INSTANCE, since INSTANCE is reassigned
                     synchronized (MultiThreadedSingleton.class) {
                            if (!INSTANCE.isPresent()) {
                                   INSTANCE = Optional.ofNullable(new MultiThreadedSingleton());
                            }
                     }
              }
              return INSTANCE;
       }
}


Here, instead of synchronizing the whole method, only the critical section is synchronized. An additional presence check is made inside the synchronized block to ensure the singleton is safe for concurrent access.

Pros:

  • Concurrency safe.
  • Less overhead in a multi-threaded environment.
  • Lazy instantiation of the instance.

Cons:

  • Prone to Reflection, i.e. multiple instances can be created using Reflection.
  • Prone to Cloning, i.e. cloning returns new instances.

Serialization safe Singleton 

At times, the Singleton must be serialized, i.e. its state has to be saved/persisted and retrieved back, or it has to be sent over the network. To achieve this, the Singleton class must implement the marker interface Serializable.

A serializable Singleton has a problem when the reverse process, deserialization, is applied: deserialization creates new instances of the Singleton class.

The destruction of the Singleton guarantee for a serialized class can be prevented by implementing the readResolve method, through which a class can directly control the instances of its own type being deserialized. The method is defined as follows:

            ANY-ACCESS-MODIFIER Object readResolve()

                         throws ObjectStreamException;

The readResolve method is implemented to determine whether the SerializedSingleton already exists and to substitute the preexisting SerializedSingleton object, maintaining the identity constraint. In this way, the uniqueness of SerializedSingleton objects is maintained across serialization.

Code snippet of Serialization Safe Singleton.  

import java.io.Serializable;

public class SerializedSingleton implements Serializable {

       private static final long serialVersionUID = 1L;

       private SerializedSingleton() {
       }

       private static class SingletonHolder {
              private static final SerializedSingleton INSTANCE = new SerializedSingleton();
       }

       public static SerializedSingleton getInstance() {
              return SingletonHolder.INSTANCE;
       }

       // called by the deserialization machinery; returns the existing instance
       public Object readResolve() {
              return getInstance();
       }
}
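The identity guarantee that readResolve provides can be checked with a small serialize/deserialize round trip. The demo below is a self-contained sketch: it uses a compact copy of the serialization-safe singleton pattern (the names ReadResolveDemo, SafeSingleton and roundTrip are illustrative, not from the article).

```java
import java.io.*;

public class ReadResolveDemo {

    // Compact copy of the serialization-safe singleton described above.
    static class SafeSingleton implements Serializable {
        private static final long serialVersionUID = 1L;
        private static final SafeSingleton INSTANCE = new SafeSingleton();
        private SafeSingleton() { }
        public static SafeSingleton getInstance() { return INSTANCE; }
        // Replaces the freshly deserialized object with the existing instance.
        private Object readResolve() throws ObjectStreamException {
            return getInstance();
        }
    }

    // Serialize the singleton to bytes and read it back.
    public static SafeSingleton roundTrip(SafeSingleton s) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(s);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (SafeSingleton) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        SafeSingleton before = SafeSingleton.getInstance();
        SafeSingleton after = roundTrip(before);
        // readResolve guarantees the very same instance comes back.
        System.out.println(before == after);   // prints "true"
    }
}
```

Without the readResolve method, the final comparison would print false, because deserialization would have manufactured a second instance.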


Reflection safe Singleton 

The Reflection API in Java is commonly used by programs that need to examine or modify the runtime behavior of applications running in the Java Virtual Machine. Given that power, the Reflection API can break a Singleton class by changing the access level of its constructor from private to public. Any number of new instances can then be created from the Singleton class.

Code Snippet.

import java.lang.reflect.Constructor;

public class Reflectiontest {

       public static void main(String[] args) {
              EagerlyInitializedSingleton rSingleton1 =
                            EagerlyInitializedSingleton.getInstance().get();
              try {
                     Class<?> clazz = Class.forName(
                                   "com.pattern.creational.singleton.EagerlyInitializedSingleton");
                     Constructor<?> constructor = clazz.getDeclaredConstructor();
                     // make the private constructor accessible
                     constructor.setAccessible(true);
                     EagerlyInitializedSingleton rSingleton2 =
                                   (EagerlyInitializedSingleton) constructor.newInstance();
                     System.out.println("First Instance Hashcode ==>" + rSingleton1.hashCode());
                     System.out.println("Second Instance Hashcode==>" + rSingleton2.hashCode());
              } catch (Exception e) {
                     e.printStackTrace();
              }
       }
}

The hashcodes are now different, implying that the singleton is no longer a Singleton.

First Instance Hashcode ==>366712642

Second Instance Hashcode==>1829164700

There are two ways to prevent the Reflection API from breaking the Singleton:

  1. Check for an existing instance in the constructor.
  2. Use an enum to design the Singleton.

Code Snippet.

private EagerlyInitializedSingleton() {
       // a second, reflective instantiation finds INSTANCE already set
       if (INSTANCE != null) {
              throw new IllegalStateException("Singleton already constructed");
       }
}



But again, this method is not totally foolproof, since if someone tries to mess around with reflection to access private members, they may be able to set the field to null themselves.

The foolproof way to guard against Reflection is to use an enum to create the Singleton.

Code Snippet.

public enum EnumSingleton {

       INSTANCE;

       // enum instances cannot be created reflectively, making this reflection safe
}

To be continued…

The Importance of Performance Baselining for Digital Experience Management

by Sri Chaganty

Chief Technology Officer

Digital transformation is the integration of digital technology into all areas of a business, resulting in fundamental changes in how a business operates and the value they deliver to their customers.

According to research from IDC, two-thirds of the CEOs of Global 2,000 companies will shift their focus from traditional, offline strategies to more modern digital strategies to improve the customer experience before the end of 2019 – with 34% of companies believing they’ll fully adopt digital transformation within 12 months or less.

End User Experience or Customer Experience is key to Digital Experience. If your business depends on mission-critical web or legacy applications, then monitoring how your end users interact with your applications is critical. The end users’ experience after pressing the ENTER key or clicking Submit might decide the bottom line of your enterprise.

Most monitoring solutions try to infer the end-user experience based on resource utilization.  However, resource utilization cannot provide meaningful results on how the end-user is experiencing an interaction with an application.  The true measurement of end-user experience is availability and response time of the application, end-to-end and hop-by-hop.

The responsiveness of the application determines the end user’s experience. To understand that experience, contextual intelligence on how the application responds based on the time of day, the day of the week, the week of the month and the month of the year must be measured. Baselining requires capturing these metrics across a time dimension. Baselining the response time of an application at regular intervals provides the ability to ensure that the application is working as designed; it is more than a single report detailing the health of the application at a certain point in time.

“Dynamic baselining” is a technique to compare real response times against historical averages. Dynamic baselining is an effective technique to provide meaningful insight into service anomalies without requiring the impossible task of setting absolute thresholds for every transaction.
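As a sketch of the idea (the names below are illustrative, not from any specific monitoring product), a dynamic baseline can be as simple as comparing the current response time against the mean and standard deviation of historical samples from the same time bucket:

```java
import java.util.Arrays;

public class DynamicBaseline {

    // Flags a response time as anomalous when it deviates from the
    // historical average by more than `k` standard deviations.
    // `historyMs` would hold past response times (in ms) captured for the
    // same time-of-day / day-of-week bucket, per the baselining approach above.
    public static boolean isAnomalous(double currentMs, double[] historyMs, double k) {
        double mean = Arrays.stream(historyMs).average().orElse(0.0);
        double variance = Arrays.stream(historyMs)
                .map(v -> (v - mean) * (v - mean))
                .average().orElse(0.0);
        double stdDev = Math.sqrt(variance);
        return Math.abs(currentMs - mean) > k * stdDev;
    }

    public static void main(String[] args) {
        // Hypothetical Monday-morning response times for one transaction.
        double[] mondayMorning = {120, 130, 125, 128, 122, 127};
        System.out.println(isAnomalous(126, mondayMorning, 3));  // false: within the band
        System.out.println(isAnomalous(400, mondayMorning, 3));  // true: a service anomaly
    }
}
```

The per-bucket history is what makes the threshold "dynamic": the same 400 ms reading might be normal during a peak-load window with a different baseline, which is why no single absolute threshold would work.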

A robust user experience solution will also include application and system errors that have a significant impact on the ability of the user to complete a task. Since the user experience is often impacted by the performance of the user’s device, metrics about desktop/laptop performance are required for adequate root-cause analysis.

For example, when you collect response time within the Exchange environment over a period of time, with data reflecting periods of low, average, and peak usage, you can make a subjective determination of what is acceptable performance for your system. That determination is your baseline, which you can then use to detect bottlenecks and to watch for long-term changes in usage patterns that require Ops to balance infrastructure capacity against demand to achieve the intended performance.

When you need to troubleshoot system problems, the response time baseline gives you information about the behavior of system resources at the time the problem occurred, which is useful in discovering its cause. When determining your baseline, it is important to know the types of work that are being done and the days and times when that work is done. This provides the association of the work performed with the resource usage to determine whether performance during those intervals is acceptable.

Response time baselining helps you understand not only resource utilization issues but also the availability and responsiveness of the services on which the application flow depends. For example, if your Active Directory is not responding optimally, the end user experiences unintended latencies in the application’s performance.

By following the baseline process, you can obtain the following information:

  • What is the real experience of the user when using any application?
  • What is “normal” behavior?
  • Is “normal” meeting service levels that drive productivity?
  • Is “normal” optimal?
  • Are deterministic answers available?
    • Time to close a ticket, Root cause for outage, Predictive warnings, etc.
  • Who is using what, when and how much?
  • What is the experience of each individual user and a group of users?
  • Dependencies on infrastructure
  • Real-time interaction with infrastructure
  • Gain valuable information on the health of the hardware and software that is part of the application service delivery chain
  • Determine resource utilization
  • Make accurate decisions about alarm thresholds

Response time baselining empowers you to provide guaranteed service levels to your end users for every business-critical application, which in turn helps the bottom line of the business.

About the Author:

Sri Chaganty is a Serial Entrepreneur with over 30 years’ experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.

AIOps – GAVS’ Vision

By Chandra Mouleswaran Sundaram

Head – Infra Services

When we experienced Windows 3.0 – the mature GUI version of the operating system – three decades ago, it was a defining moment for the IT world. Five years later, when Citrix released WinFrame – software that enabled running desktop applications on servers through a browser – it was another wow moment for the IT industry. Then we saw another great moment in 1999 with VMware Workstation, which allowed us to run multiple instances of an OS on a single piece of hardware. Then came smartphones, cloud, Software Defined Data Centers, Big Data and now Artificial Intelligence. We are at the threshold of unleashing the power of artificial intelligence to help humans.

Are machines better than humans? Humans win on the ‘intelligence quotient’ over machines, but when it comes to repetitive tasks, machines score over humans by doing them error-free, any time, every time. Artificially intelligent programs combine the accuracy of machines with the intelligence of humans. We at GAVS have been nurturing artificial intelligence for the past five years, and we have a vision for it.

Artificial intelligence for IT operations (AIOps) means performing IT operations through self-learning and self-correcting systems. It deploys machine learning techniques to understand the ever-changing IT environment, artificial intelligence to detect abnormalities, and intelligent automation to remediate those abnormalities before they cause impact.

One of the key characteristics of AIOps is ‘self-learning’. The platform should be capable of understanding the physical and logical relationships between the assets installed in the environment, and the behavior of those assets, through self-learning. It should not depend on a CMDB (Configuration Management Database) and/or ADDM (Application Discovery and Dependency Mapping). It should not require any rules or configuration. It should learn from the events occurring in the environment and the way those events occur.

This characteristic requires large volumes of data to flow into the AIOps platform. The quantity of data determines the accuracy of correlation and prediction, and the quality of data determines the number of defects. An AIOps platform should not depend on other tools to provide the data it requires to correlate and predict. It should know what kind and type of data it needs, and it should have the capability to generate that data.

Not all systems in an environment behave the same way. Every system has its own characteristics and behavior, and a general rule applied across all systems leads to false positives. AIOps platforms help move away from ‘rule-based alerts’ to ‘pattern-based alerts’. The platform should generate alerts based on historic performance and consumption, taking into account the day, time, load, and related services and devices. It should then correlate based on the patterns by which events, logs and alerts are generated in the environment. Further, it should predict the performance and health of applications and prescribe remediation where needed. Rules, configuration and manual inputs are alien to a true AIOps platform.
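As a minimal sketch of the correlation step, alerts that occur close together in time can be grouped into one candidate incident. This is an illustration only: the `ts` field name and the fixed 120-second window are assumptions, whereas a real AIOps platform would learn such groupings from the observed event patterns:

```python
def correlate(alerts, window=120):
    """Group alerts whose timestamps fall within `window` seconds of the
    previous alert into one candidate incident (temporal correlation)."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        if incidents and alert["ts"] - incidents[-1][-1]["ts"] <= window:
            incidents[-1].append(alert)   # same burst -> same incident
        else:
            incidents.append([alert])     # gap in time -> new incident
    return incidents
```

Three alerts at 0 s, 60 s and 500 s, for example, collapse into two candidate incidents instead of three independent tickets.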

Needless to say, the AIOps platform should be able to read data of any format, from any source, using any method. As far as possible it should be minimally intrusive. It should be able to receive data at the speed at which it is generated at the source, process it at the speed at which the AI programs consume it, and display results at the speed at which users want to see them.

The core of an AIOps platform typically has three components: monitoring, analysis and automation. Today these run as three different agents on the server and hand off work among themselves. GAVS sees this model metamorphosing into a single component that combines all three. This AI agent knows what it needs to check, when to check it, and what action to take when it encounters an anomaly. However, this agent still sits outside the application. GAVS foresees the agent being built into the application in the coming years: every application will have AI built in to ensure defect-free running. GAVS is collaborating with the Indian Institute of Technology, Madras, India to build this capability through Reinforcement Learning.

The role of AIOps in incident resolution should not stop at correlating events, but go beyond it. While correlation brings together all the alerts associated with an incident, it does not by itself point administrators in the right direction. One thing the platform is expected to do is display a graphical view of the journey of transactions, showing the devices they pass through, with past, present and predicted response times between those devices end to end. This helps administrators quickly narrow down the devices and the path to analyze.

The ability to predict the behavior of even the smallest physical or logical component in a system should lead to predicting the behavior of the servers, then of the applications those servers support, and finally of the business processes constructed from those applications. GAVS has a methodology to predict the Application Health Index (AHI), which is a function of the performance of all the components of all servers physically and logically associated with an application. The same methodology can be extended to predict the health of business processes.
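The AHI idea, an index that is a function of many component performances, can be sketched as a weighted aggregate of per-component health scores. This is an illustrative reconstruction under assumed scoring conventions, not GAVS’ actual formula:

```python
def application_health_index(components, weights=None):
    """Aggregate per-component health scores (each in [0, 1]) across all
    servers associated with an application into one index in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in components}  # equal weighting
    total = sum(weights[name] for name in components)
    return sum(weights[name] * score
               for name, score in components.items()) / total
```

With equal weights, a healthy CPU (0.9) and a degraded disk (0.5) yield an AHI of 0.7; weighting business-critical components more heavily pulls the index toward their condition.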

Artificial intelligence will play a key role even before an application is rolled into production. It can be deployed during the QA stage to predict how the application will behave once deployed. The AI engine mines business requirements, test cases, unit test cases, logged defects and the xRTM from the knowledge repository to build predictive patterns of how application quality will play out in deployments.

We let humans do better things than what a machine can do. As machines become more and more intelligent, humans become wiser and wiser.

About the Author:

Chandra heads the IMS practice and IP at GAVS. He has over 25 years of rich experience in IT infrastructure management, enterprise application design and development, and incubation of new products and services in various industries. He was with Genpact earlier, where he served as CTO for about 7 years before becoming the leader of IP Creation. He built a new virtual desktop service called LeanDeskSM for Genpact, from ‘concept’ to ‘commercialization’.

He also holds a patent for a mistake-proofing application called ‘Advanced Command Interface’. He thinks ahead, and his implementation of disk-based backup using SAN replication in one of his previous organizations, as early as 2005, is proof of his visionary skills. Chandra is a graduate in Electronics and Communication Engineering.

Agile Testing: We Do It Wrong

By Katy Sherman

Director of Software Engineering at Premier Inc.

Last year I made a discovery. I realized I had misunderstood testing. I’d thought to test something meant to make sure it worked as expected. Matched the acceptance criteria. Did what it was supposed to be doing.

I was wrong

If you have ever been to an IKEA furniture store, you might have seen a robot performing a durability test on a chair. In the middle of the store, inside a glass box, pressure is applied to the famous Poang model 20 times per minute, simulating a 250 lb person sitting down in the chair.

Impressive, right? I have actually owned this chair for over 10 years and it’s still as good as new, except for a few scratches from my cats.

But if I tried to test this chair using my original definition of a test as “make sure it works”, I would probably just sit in it one time and then declare the test successful! After all, it worked for me.

Would it be enough?

See, I am an average-size human, maybe on the smaller side, and I sit in chairs in a very graceful way. At least I hope so.

But what about other potential chair owners? Who might be bigger, smaller, who might want to lean, rock, jump, turn, and swing, while drinking tea and chewing on a sandwich? Those who will use the chair not one time, but for many years?

Now, the idea of testing is shifting from “checking that it works” to “finding creative ways to break it before the users do”.

Break the system before the users break it.

Software users will put your application to a real test, and they also come in different shapes. Some will try random workflows because they don’t know the product very well. Power users will want to dig deeper. And finally, there will be people who actually want to break into your system with malicious intent.

So, instead of our typical linear progression from coding to validation, we need to get into a cycle of building the application and breaking it, fixing it and breaking again. We should continue through the cycle of building and breaking until the product becomes resilient.

To break like a user, we need to think like a user.

Real test planning is not about capturing step-by-step instructions for a test case. It’s about creative imagination at work, trying to tackle the system from many different angles to find its weaknesses.

Invalid or unexpected input goes a long way in getting insight into what the system will do when the user makes a mistake or intentionally feeds it incorrect data. Testing on a large data set, or a small one, standalone or integrated into multiple systems will widen your horizons and knowledge about the product behavior. Trying to act as different users or user personas might lead to unexpected findings. Putting the system under stress as many users try to access it will help to find the system’s limitations.
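The "break it first" mindset translates directly into tests that feed invalid and boundary input and assert the system fails safely. The parser below is a made-up example for illustration, not code from any real product:

```python
def parse_quantity(text):
    """System under test: an order-quantity parser that accepts 1-100."""
    value = int(text)                     # raises ValueError on junk input
    if not 1 <= value <= 100:
        raise ValueError("quantity out of range")
    return value

def fails_safely(candidate):
    """True if the parser rejects the input cleanly instead of accepting it."""
    try:
        parse_quantity(candidate)
        return False
    except ValueError:
        return True

# Adversarial inputs: junk text, boundaries, negatives, emptiness.
for bad in ["abc", "0", "101", "-5", "", "1e3"]:
    assert fails_safely(bad)
assert parse_quantity("100") == 100       # the happy path still works
```

Sitting in the chair once is the `parse_quantity("100")` line; everything above it is the robot in the glass box.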

And if you tested and didn’t find any defects, congratulations, your software is perfect!

Except, I haven’t seen one yet, so it’s probably too early to celebrate. It is possible that you are not looking in the right place.

You never know if the system is free of defects, or if they are hiding and you can’t find them.

Do not despair. Testing work is rewarding in itself, because it creates knowledge. The more experience you gain with your product and the different ways to break it, and the more you learn about users and how they break it, the closer you get to making your application indestructible, like that plywood IKEA chair my cats like so much.

Build and share knowledge, learn from users, become better at breaking things.

And most importantly, do it in every sprint. That’s the heart of Quality in Agile. We are moving forward so fast, we have to be confident about our product. And the only way to be confident, is to do our best to test it, which means – to break it.

About the Author:

Katy is passionate about:

• Leadership and vision

• Innovation, technical excellence and highest quality standards

• Agility achieved through teamwork, Agile, Scrum, Kanban, TDD, CI/CD, DevOps and automation

• Breaking silos and promoting collaboration of Development, Testing and Operations under cross-functional umbrella of Software Engineering

• Diversity of personalities, experiences and opinions

Things Katy does to spread the word:

•  Speak at Technology conferences (including as an invited and keynote speaker)

•  Blog and participate in group discussions

•  Collaborate with schools, universities and clubs

•  Empower girls and women, help them learn about Technology and become engineers.

Impact of automation in IT incident management

Need for automation

In this era of automation, incident management plays a decisive role in an organization’s success. Automation enables a business to categorize, scrutinize and report a problem in no time, so that standard business operations can be restored quickly and without cost implications. Automation creates an expectation of improvement in incident and change management while ensuring stability, speed and accuracy. It helps an organization manage costs effectively and improve the quality of IT services.

Prerequisites of automation of incident management

The following prerequisites need to be considered before initiating the automation of incident management.

  • First, it is essential to identify the criticality and risk factors in the business before automating incident management.
  • For automation to classify and remediate, a large pool of valuable and credible data is required.
  • The effort should focus on continuous process improvement.
  • Services and business rules should be well defined.

Impact of automation in incident management

Efficient incident management can decide the success of an organization. It quickens the process of identification, analysis and restoration; with manual incident management, however, the process does not remain effective. The following are the reasons why incident management needs to be automated:

  • Save time and money

Automating incident management significantly reduces manual effort, saving time. This enables employees to focus on more important business functions and improves productivity. Since the approach is proactive, it also reduces the risk of future expenditure, making it more cost-effective.

  • Improve communication

Across the detection, diagnosis, repair and recovery process, communication improves considerably among the people involved. Automation of incident management streamlines communication through bi-directional channels such as email, phone, SMS and messenger.

  • Centralize data access

Automation of incident management provides a central dashboard that allows real-time data access, making it simple and efficient for the entire team to access and control data throughout the process.

  • Planning and organizing

Automation helps with internal planning, making incident management more effective. It also improves workflow monitoring, which plays a crucial part in timely recovery and resolution. From timely notification to automated corrective actions, the entire process is integrated and organized.

  • Business impact

Outages and security breaches can result in loss of revenue and negatively impact customer perception and employee productivity. Longer-term impacts include reputational damage and loss of customers. Automation of incident management ensures faster restoration of service.

  • Proactive approach

A proactive approach manages incidents more efficiently, so potential incidents can be addressed on time. Correlating and prioritizing incoming alerts reduces risk. The automation process can reduce downtime by up to 90%.

  • Transparency and accountability

Automation of incident management increases visibility and transparency, which creates a cohesive team environment.

  • Reduction in incident volume

Enhanced incident management quality has resulted in a 30% reduction in incident volume. Applying filters to monitoring alerts further improves the relevance of notifications.


For automation to work successfully in incident management, IT systems should be connected. A set of well-coordinated processes, knowledgeable staff and effective stakeholder communication is essential to minimize the business impact of major incidents. Automated incident management made a huge difference for a renowned internet service provider: a WannaCry ransomware attack detected in their network was located through an alarm and isolated, and the associated alerts snoozed. It enabled the company’s IT security team to stay in control of their incident response (IR) activities and respond to such alerts swiftly and effectively.
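The correlation-and-prioritization step mentioned above can be sketched as deduplicating repeated alerts and ranking the survivors by severity and business impact. The field names (`host`, `check`, `severity`, `impact`) and the multiplicative scoring are illustrative assumptions:

```python
def prioritize(alerts):
    """Keep the most recent alert per (host, check) pair, then rank by
    severity x business impact so the riskiest alerts surface first."""
    latest = {}
    for alert in alerts:                  # later entries overwrite duplicates
        latest[(alert["host"], alert["check"])] = alert
    return sorted(latest.values(),
                  key=lambda a: a["severity"] * a["impact"],
                  reverse=True)
```

Deduplication keeps a flapping check from flooding the queue, while the ranking puts the alert with the highest business consequence at the top of the responders' list.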

Chatbot and its importance in AI setup

Acceptance of chatbots

Artificial intelligence (AI) has not only touched several aspects of human life, but also reached consumers directly in the form of chatbots, bringing stability and reliability to their daily lives. Chatbots and virtual assistants are new tools designed to simplify the interaction between humans and computers. Chatbots have been widely accepted, partly due to the use of chatbot backstories during their creation, which eliminates users’ skepticism towards adoption and makes the bots more relatable in real-life scenarios.

Chatbot developers focus on creating unique backstories to give their finished product a distinct level of reliability and relatability. Effective chatbot development often depends upon creating a chatbot personality with unique traits, values, speech patterns and dialogue engagement, so that a comfort zone can be woven around the concept of AI-driven chatbots. With massive evolution in service delivery, consumer expectations are growing and can be met only with innovative technologies; chatbots are considered one such innovation that can deliver customer satisfaction.

What is a chatbot?

A chatbot is AI software that can carry on a seamless conversation with a user, using Natural Language Processing (NLP), through messaging applications, telephone, websites, mobile apps or voice applications. A chatbot primarily executes two tasks: analyzing the user’s request and creating a response to it. The response can be generic or predefined, or it can be based on information retrieved from data stored in enterprise systems. Apart from enhancing customer experience, chatbots improve operational efficiency by reducing the cost of customer service.
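Those two tasks, analyzing the request and creating a response, can be sketched with simple keyword matching against predefined intents. A production chatbot would use trained NLP models instead; every name and phrase below is illustrative:

```python
def answer(query, intents, fallback="Let me connect you to a human agent."):
    """Match the user's request against predefined intents by keyword
    overlap and return the canned response, else fall back."""
    words = set(query.lower().split())
    best_response, best_overlap = fallback, 0
    for keywords, response in intents:
        overlap = len(words & keywords)   # crude stand-in for NLP intent scoring
        if overlap > best_overlap:
            best_response, best_overlap = response, overlap
    return best_response
```

With an HR-style intent list such as `[({"payroll", "salary"}, "Payroll runs on the last working day.")]`, a query mentioning payroll gets the predefined answer, while anything unrecognized falls through to a human agent.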

Importance of chatbot, powered by AI

Since AI helps a system perform tasks like a human, it lends a human touch to the conversation a chatbot strikes up. AI ensures that the user’s query is understood and an accurate response is triggered. Without AI, a chatbot cannot understand a novel query, which defeats the personalization of the conversation with the end user. The following points illustrate the importance of AI-driven chatbots.

Enhance customer service experience

AI-powered chatbots can keep customers engaged in lively, interactive conversation to solve complex queries. AI helps a chatbot learn from past conversations with customers to understand their choices and preferences. This approach also saves time, making the process more efficient and dependable. As per research, 83% of online shoppers need assistance to choose a product that fits their budget and needs. Chatbots can fill in missing information about the product a customer is interested in.

Proactive customer interaction

Chatbots can handle a large number of requests at the same time. Their ability to continuously learn and adapt while interacting with customers in a natural way has opened new prospects for enterprises. With proactive customer interaction in particular, a company can create immense brand visibility in a competitive market.

Improve customer engagement

A humorous and lucid flow of information can keep customers engaged with the brand. Through chatbots, the interaction becomes more engaging and fast-paced. In fact, research suggests that by 2020, 85% of consumer interactions will be managed by companies through chatbots, without human intervention.

Gauge product and service

Chatbots can be of great help in collecting feedback from users, which can be utilized to improve products and services. This kind of insight can improve business decisions.

Improving sales

The presence of a chatbot in an online store enhances the overall experience of the user and entices the potential buyer to become an actual buyer. One cannot deny that interacting with a virtual store assistant that suggests new products based on search history and helps the buyer complete the purchase and payment is a compelling experience.

Cost saving solution

Implementing an AI-powered chatbot definitely costs less than hiring an individual and training him or her to support customer service. Gartner predicted that by 2020, 40% of mobile interactions between consumers and stores will be managed by smart agents.

Operational benefits

  • 24*7 customer support at low cost
  • Accurate and proactive assistance with empathy
  • Customer satisfaction
  • Accessibility to visitor’s data
  • Improved marketing and sales strategy
  • Mobile friendly
  • Updated service

Popular chatbots

A few popular AI-driven chatbots and chatbot platforms that deserve mention are:

  • IBM Watson chatbot
  • Amazon Lex chatbot
  • LivePerson chatbot
  • Dialogflow Chatbot
  • Bold360 chatbot
  • Microsoft Bot Framework
  • Spotify


Chatbots are dramatically impacting business. Apart from IT, the industries that have readily adopted chatbots include e-commerce, retail, banking, leisure, travel and healthcare. According to market research, the chatbot market is rapidly expanding, and it is predicted that by the end of 2020, 85% of the online interactions between customers and online retail stores will be handled by chatbots. Research also suggests that 6 billion connected devices will need AI-powered chatbot support by the end of 2019.

GAVS’ ZIF is an AIOps based TechOps platform that enables proactive detection and remediation of incidents, helping organizations drive towards a Zero Incident Enterprise™. Visit to know more

AI and its impact on app competitiveness

AI in mobile tech world

This is the era of the fourth industrial revolution, where technology without artificial intelligence (AI) is unimaginable. With global acceptance, AI has encompassed all spheres, touching human life in several ways, including the mobile tech world. Research indicates that AI is rapidly gaining popularity: tech giants like Baidu and Google have already spent between $20 and $30 billion on AI to improve IT operations. Segments like healthcare, education, finance and IT ops are investing heavily in AI, but the prominence of AI in the mobile tech world deserves a special mention.

Importance of AI in mobile app

The focus of AI is to develop intelligent machines that think, work and learn from experience like humans. When AI joined hands with machine learning, analyzing visual inputs such as gestures, objects and faces became seamless. A few such AI mobile applications are as follows:

  • An iPhone app powered by AI can enhance perception, apply reason and even solve problems
  • Robin is an AI voice assistant that can read out text and other searched information, along with voice GPS navigation
  • Google Smart App can simplify messaging
  • Hound is a voice search engine that makes information handy on a voice command

Deployment of AI in mobile app

AI uses the simple process of trial and error to learn solutions in mobile app development. Through this method, various attempts are made to locate an appropriate solution; that solution is then stored for future use as a reference point for similar circumstances. Alongside the solutions themselves, mobile app developers are also focusing on drawing appropriate inferences to enhance the interaction process. This helps users reach predefined solutions for various device problems.
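The trial-and-error loop with a stored solution, as described above, can be sketched as follows. All names here are illustrative assumptions, not any real framework's API:

```python
def solve(problem, candidates, works, memory):
    """Try candidate fixes until one works, then store the winner keyed
    by the problem signature so similar circumstances reuse it directly."""
    if problem in memory:
        return memory[problem]            # reference point from a past trial
    for candidate in candidates:
        if works(problem, candidate):     # the trial-and-error attempt
            memory[problem] = candidate   # remember the solution
            return candidate
    return None                           # no candidate solved the problem
```

The first call pays the cost of trying candidates; subsequent calls with the same problem signature return the remembered solution immediately, which is the "reference point for similar circumstances" the text describes.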

Example of AI apps

The following existing apps provide an enriched user experience:

  • Replika is an advanced AI app for iPhone that covers several aspects of a user’s life. The app can hold conversations with the user like a real person.
  • The app Airpoly can identify three objects in a single second.
  • Companies like Amazon, Apple, Artificial Solutions, Google, IBM, IPsoft, Microsoft and Satisfi depend upon virtual agents like chatbots or voice managers to cater to customer support needs.
  • SMACC simplifies finance and account management, making the process error-free.
  • Cortana can assess relevant information, sort it and deliver services efficiently, such as scheduling meetings, sending emails, tracking events, and sharing updates and reminders.
  • Hound can recognize voice commands to search for songs, videos or the weather. It can also set an alarm, find a nearby restaurant or book an Uber ride.
  • Personal assistants like Siri became popular with their voice interfaces. Siri assists with phone and text actions, can provide information about weather and currency, schedule events, set reminders and provide an engaging experience.
  • The My Starbucks Barista mobile app enables customers to place orders by speaking to the app.
  • Taco Bot, launched by Taco Bell, recommends personalized menus based on user-specific purchase trends.

Productivity app

Productivity apps powered by AI can be very useful in streamlining work. For example, Google’s G Suite, Microsoft Office 365 and Delve can auto-generate responses and surface relevant information.

Hopper app

This AI-powered app is used for predictive analysis, especially in the tourism industry, to predict price patterns so that customers can plan visits to their favorite destinations efficiently.

Technologies empowering apps

To create apps empowered with AI, developers must choose an appropriate platform and build features keeping end-user preferences in mind. The technologies that improve app performance and competitiveness include:

  1. Speech-to-text (STT) and text-to-speech (TTS) engines that convert voice to text messages and vice versa.
  2. Tagging, which helps the app analyze users’ requirements.
  3. A noise reduction engine that eliminates white noise, improving voice command capacity.
  4. Voice biometrics and recognition, which work as authentication for stronger security.

Technologies and the companies associated with them

| Natural Language Recognition | Speech Recognition Technology | Machine Learning Platform | Image or Emotion Recognition |
| ---------------------------- | ----------------------------- | ------------------------- | ---------------------------- |
| Digital Reasoning            | Verint                        | Amazon                    | Nviso                        |
| LucidWorks                   | Nuance                        | Google                    | Affectiva                    |
| Yseop                        | Opentext                      | Microsoft                 |                              |

Impact of AI on app competitiveness

Innovation has led end users to expect better performance from mobile apps. Retail giants like eBay and Amazon have already proved the worth of AI in mobile apps. AI-enabled apps engage their users, strategically strengthen the brand, enhance productivity and help reduce errors. The algorithms adjust the app and create more meaningful, context-rich prospects to keep end users engaged. AI-aided chatbots on mobile devices use standard messaging tools and voice-activated interfaces, which reduces data collection time and simplifies tasks. User-specific personalization also helps with mundane or repeatable tasks. AI has had a great impact in the healthcare industry as well, where reliability, predictability, consistency, quality and patient safety have improved with AI-enabled apps.

AI in app market based on geography

The following geographical areas indicate extensive impact of AI on mobile app:

  • North America
  • South America
  • Europe
  • Asia Pacific
  • Middle East and Africa 

AI driven opportunities

AI has opened a horizon of new opportunities through app competitiveness:

  • Smart interaction through chatbots
  • Deep personalization through speech recognition
  • Special opinion through recommendation services
  • Intellectual answers through learning behavior pattern


We can conclude that AI has a dramatic impact on the transformation and competitiveness of mobile apps. As per market research, this competition will only increase beyond 2020, since more organizations globally are investing in AI for revenue improvement and cost reduction. Deployment rates among different industry verticals have surged exponentially over the past few years.

Importance of AI in healthcare

Advent of AI in healthcare

Artificial intelligence (AI) is touching every sphere of our lives, but in the healthcare domain its impact is truly life-changing. Secondary research reveals that combined public and private sector investment in healthcare AI is expected to reach $6.6 billion by 2021, encompassing areas such as healthcare delivery, clinical research, drug development and insurance. Although human physicians will not be replaced by AI doctors, AI will definitely prove useful in simplifying clinical decisions, especially in radiology. By imitating human cognitive functions, AI enhances the sector’s use of both structured data, such as images, genetic and electrophysiological data, and unstructured data, such as clinical notes and medical journals, through machine learning and natural language processing (NLP) respectively.

Acceptability of AI

In healthcare, AI is effective in early detection, diagnosis, treatment, prediction and prognosis evaluation for diseases in areas such as cancer, neurology and cardiology. Through big data analytics, relevant information can be gathered from piles of healthcare data, and such insight can assist clinical decisions. AI also helps incorporate self-correcting abilities and facilitates automatic updating of medical information across healthcare devices. Its ability to reduce diagnostic and therapeutic errors, create real-time health risk alerts and predict health outcomes is gaining traction over time. NLP is particularly helpful in translating text into machine-readable structured data, which can thereafter be analyzed by machine learning techniques.
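As a toy illustration of turning free text into structured, machine-readable data: the sketch below uses hand-written patterns, whereas real clinical NLP relies on trained language models, and the extracted fields are assumptions for the example:

```python
import re

def extract_vitals(note):
    """Pull a couple of structured vitals out of a free-text clinical note."""
    vitals = {}
    bp = re.search(r"BP\s*(\d+)/(\d+)", note)
    if bp:
        vitals["systolic"] = int(bp.group(1))
        vitals["diastolic"] = int(bp.group(2))
    temp = re.search(r"temp(?:erature)?\s*([\d.]+)", note, re.IGNORECASE)
    if temp:
        vitals["temperature"] = float(temp.group(1))
    return vitals
```

Once a note such as "BP 120/80, temperature 98.6" becomes a structured record, downstream machine learning techniques can analyze it alongside device readings and lab data.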

Prevalence of AI devices in healthcare

Accurately diagnosing disease is a complex task; AI-based tools, with their enhanced functionality, make the entire process seamless and precise. They improve patients’ interactions with healthcare providers, systems and services. The AI approaches that find broad usage in medical applications are:

  1. Classical machine learning techniques
  2. Deep learning techniques
  3. NLP methods

Prerequisite of AI in healthcare

Before AI systems can be deployed, healthcare applications need to be trained on data generated from clinical activities such as screening, diagnosis and treatment. Clinical data includes demographics, medical notes, electronic recordings from medical devices, physical examinations of patients and laboratory images. Such data is valuable in diagnosing disease and detecting genetic abnormalities, and is particularly effective in diagnosing gastric cancer and neural injury.

Impact of AI technologies in diagnosis

  • IBM Watson has proved a reliable AI system for diagnosing cancer, validated through a double-blinded study.
  • Google’s DeepMind Health combines machine learning with neuroscience to build neural-network algorithms that can detect medical conditions faster.
  • AI in healthcare can help restore control of movement in patients with quadriplegia, using spinal motor neurons to regulate upper-limb prostheses.
  • AI can help diagnose heart disease from cardiac images and support treatment through automated, editable ventricle segmentations.
  • For chronically ill patients, disease management and care plans can be approached comprehensively.

Prominence of AI in healthcare

AI and robotics are encompassing all aspects of our healthcare ecosystem; they are efficient, quick and cost-effective at the same time. AI and the Internet of Medical Things (IoMT) have already created a huge impact by encouraging individuals toward healthy lifestyles and proactive health management. The following points illustrate the importance of AI in healthcare.

  1. In southeast England, patients were given an AI-powered, Wi-Fi-enabled armband that can monitor respiratory rate, oxygen levels, pulse, blood pressure and body temperature. Readmissions cost US hospitals $40 billion annually; Grady Hospital, the largest public hospital in Atlanta, reduced readmission rates by 31% over a period of two years by adopting AI tools.
  2. Research suggests that with the deployment of AI in healthcare, home visits by medical practitioners reduced by 22% and long-term completion of treatment by patients increased by 96%. AI-driven systems constantly monitor and analyze warning signs, alerting both patients and professionals before care is needed.
  3. According to the World Health Organization, in 60% of cases an individual’s health and lifestyle are correlated, so AI-based systems can now trigger reminders and generate alerts based on patients’ vital signs, reminding them to take prescribed medicine.
  4. Risk can be mitigated effectively through alliances. Synaptic Healthcare Alliance, a collaborative pilot program between Aetna, Ascension, Humana and Optum, uses blockchain to manage data and utilize AI efficiently. From the tone of voice and background noise on a call, AI in healthcare can detect cardiac arrest with a 93% success rate. According to the UK’s healthcare system, AI could prevent thousands of cancer-related deaths by 2033.
  5. AI collaborates with trained professionals to diagnose competently. For example, the University of California at San Diego relies on AI to successfully diagnose childhood diseases.
  6. AI reduces the need for biopsy, with the ability to read mammograms 30 times faster with 99% accuracy.
  7. AI plays an important role in drug research and discovery.
  8. AI used in surgery can reduce a patient’s hospital stay by 21% thanks to minimal incisions. HeartLander, a miniature robot, is used in heart surgery with precision and competency.
  9. A virtual nursing assistant can save $20 billion annually and can interact with patients to create an effective care setting.
  10. AI can also automate administrative tasks, saving $18 billion for the healthcare industry. For example, IBM’s Watson helped Cleveland Clinic’s physicians by analyzing thousands of medical papers using NLP to create treatment plans.
  11. AI can improve next-generation radiology tools that no longer rely on tissue samples, analyzing 3D scans 1000 times faster than human minds.


In spite of the potential benefits of AI in healthcare, there are challenges in adopting and implementing it. To date, neither healthcare professionals nor patients have been able to rely completely on algorithms to diagnose and plan treatment. Slowly but steadily, however, individuals are adopting AI tools that can accurately diagnose issues, analyze clinical reports and identify genetic information at a much faster pace, which can save lives. It is worth mentioning here how GAVS Technologies has empowered multiple hospitals and healthcare organizations around the world with technology-led SMART solutions and delivery that have significantly improved patient care. Visit to know more.