Virtual Extensible LAN (VXLAN)

by Chandrasekar Balasubramanian

A network overlay is a method that creates an additional layer of network abstraction on top of the physical network, enabling new applications or benefits. VxLAN is a Layer-2 overlay scheme: it runs over the existing network infrastructure and provides a means to extend a Layer-2 network.

Let us first see why we need VxLAN. In a typical networking environment, IP phones, hosts and servers connect to Layer-2 devices such as switches. The switches in turn connect to other switches or to Layer-3 devices such as routers. Each host or server has a unique Media Access Control (MAC) address, which is used for Layer-2 communication between the switch and the connected devices. A switch is a Layer-2 device with multiple ports that receives frames, processes them and forwards them to the destination device based on the MAC address. To do this, the switch maintains a MAC address table, which allows any host or server connected to the switch to communicate with the other connected devices.

VLANs provide logical segmentation by department, team or application, e.g. one VLAN for HR, another for finance, another for engineering. Hosts, servers and other devices become members of a VLAN when the switch ports to which they are connected are assigned to that VLAN. Traffic originating from a host or server in a VLAN is only forwarded to other devices in the same VLAN, because each VLAN is a single broadcast domain; a different VLAN is a different broadcast domain. If traffic needs to reach a device in another VLAN, a Layer-3 switch or a router must be used.

Challenges due to Server virtualization and Containers:

With server virtualization, the multiple Virtual Machines (VMs) residing on a server are each assigned a unique Media Access Control (MAC) address by the virtualized server, which also has its own MAC address. Likewise, Docker assigns unique MAC addresses to its containers. The MAC address is the unique Layer-2 address assigned to each Ethernet device, so all the VMs are treated as Ethernet devices in addition to the virtualized server, and all containers are treated as Ethernet devices in addition to the Docker host. Recall that the switch is the Layer-2 networking device that connects servers and hosts to the network; it maintains a MAC address table and a VLAN table. We use VLAN Trunking Protocol (VTP) and trunk links to extend a VLAN on one switch to the other switches. Dot1Q encapsulation is needed for Ethernet frames to be identified with VLANs: the VLAN information is stored in the frame header as a 4-byte Dot1Q tag, within which a 12-bit VLAN ID is carried. This allows the receiving switch to identify which VLAN a frame belongs to.
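
As an illustration of that tag layout (a minimal sketch using only the standard 802.1Q field sizes, not tied to any particular switch implementation), the 4-byte tag can be packed in Python like this:

```python
import struct

TPID = 0x8100  # Tag Protocol Identifier that marks an 802.1Q-tagged frame

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: a 16-bit TPID followed by a 16-bit TCI.

    The TCI packs 3 bits of priority (PCP), 1 DEI bit and the 12-bit VLAN ID,
    which is why only 4094 usable VLANs exist.
    """
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be between 1 and 4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

print(dot1q_tag(100).hex())  # '81000064' -> frame tagged with VLAN 100
```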

With hundreds of thousands of VMs and containers, we need much larger MAC address tables. VMs and containers in a data center are also grouped according to the VLANs they belong to, and segmenting the traffic of thousands of VMs and containers can require thousands of VLANs. But the VLAN ID space imposes a limit of 4094 VLANs. Clearly a better solution than VLANs is needed in such a scaled environment, and this is where VxLAN comes in.

Challenges due to Multi-tenant environment in Cloud:

Multi-tenancy is an architecture in which a single instance of software runs on a server and serves multiple customers. Cloud service providers offer on-demand, elastic provisioning of resources to multiple tenants using the same physical infrastructure. Traffic isolation for a tenant can be done at either Layer-2 or Layer-3. With a Layer-2 network, each tenant gets its own VLANs, and because each tenant typically requires multiple VLANs and a provider services many tenants, the number of VLANs required quickly exceeds the maximum of 4094. This is again where VxLAN comes in.

Limitations of Spanning Tree Protocol (STP):

Layer-2 switches use the Spanning Tree Protocol to avoid loops caused by redundant paths. STP blocks ports so that loops are avoided, which means some ports end up unused even though they have been paid for, and STP offers neither resiliency nor multipathing. An important requirement for a virtualized environment is for the Layer-2 network to scale across the data center or between data centers; using STP in such cases leads to a large number of disabled links due to loop prevention. Newer mechanisms such as Transparent Interconnection of Lots of Links (TRILL), and the routed Layer-3 underlay with multipathing that VxLAN runs over, help alleviate this problem.

Top of Rack (ToR) switch limitation: The top-of-rack switches which connect to the servers also need to learn and maintain MAC address table entries for the virtual machines and containers running inside those servers. In large environments with several thousand VMs and containers, this can overflow the MAC address table maintained by the top-of-rack switches, leading to issues such as the switch no longer learning addresses and flooding frames with unknown destinations.

Virtual eXtensible Local Area Network (VxLAN):

As noted above, a network overlay creates an additional layer of network abstraction, and VxLAN is a Layer-2 overlay scheme that runs over the existing network infrastructure to extend a Layer-2 network; it is a Layer-2 overlay on a Layer-3 network. Each overlay is a VxLAN segment, and only VMs, network devices or containers connected to the same VxLAN segment can communicate with each other. Each VxLAN segment is identified by a 24-bit segment ID. In the following diagram we can see that the Layer-2 Ethernet frame is sent as part of a Layer-3 packet: the outer MAC address, outer IP header and outer UDP header belong to the packet carrying the overlaid Layer-2 frame, the VxLAN header carries the information used to identify and segment the traffic, and then comes the overlaid Ethernet frame followed by the checksum.

VxLAN Header:

Each VxLAN segment is identified by a 24-bit segment ID called the VxLAN Network Identifier (VNI). This allows up to 16 million VxLAN segments to co-exist within the same administrative domain, so VxLAN overcomes the 4094-VLAN limitation.
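
As a minimal sketch of that header (following the 8-byte layout defined in RFC 7348: 8 flag bits, 24 reserved bits, the 24-bit VNI, 8 reserved bits), the VNI can be packed and recovered like this:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" flag: the VNI field is valid

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VxLAN header carrying a 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    first_word = VXLAN_FLAG_VNI_VALID << 24   # flags + 24 reserved bits
    second_word = vni << 8                    # VNI + 8 reserved bits
    return struct.pack("!II", first_word, second_word)

def unpack_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VxLAN header."""
    return struct.unpack("!I", header[4:8])[0] >> 8

hdr = pack_vxlan_header(5001)
print(len(hdr), unpack_vni(hdr))  # 8 5001
```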

VxLAN Tunnel Endpoint (VTEP):

VTEPs do all the VxLAN work of encapsulation and de-encapsulation, which is what makes the Layer-2-over-Layer-3 VxLAN overlay possible.

A VTEP is the endpoint of the tunnel; it is typically located within the hypervisor on the server that hosts the VMs. Likewise, in a containerized environment the VTEP is located within the Docker host.
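
To make the encapsulation concrete, here is a small sketch using Scapy (assuming Scapy is installed and exposes its VXLAN layer; the MAC addresses, IP addresses and VNI below are made-up illustrative values, not a real deployment):

```python
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner frame: what the VM or container actually sends on its virtual NIC
inner = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# What a VTEP would put on the wire: outer Ethernet/IP/UDP headers,
# the VxLAN header with the 24-bit VNI, then the original inner frame
encapsulated = Ether() / \
    IP(src="192.168.1.10", dst="192.168.1.20") / \
    UDP(sport=49152, dport=4789) / \
    VXLAN(vni=5001) / \
    inner

encapsulated.show()  # prints the layered structure of the overlay packet
```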

VxLAN Deployment:

VxLAN is deployed in either data centers or cloud environments where containers and/or virtual machines are used.

VxLAN Security: The Layer-2-over-Layer-3 overlay mechanism used in VxLAN increases the attack surface. Layer-3 attacks on the tunnelled traffic can be mitigated by using IPsec, which authenticates and encrypts the VxLAN traffic, while Layer-2 attacks can be mitigated using 802.1X authentication.

VxLAN Monitoring:

VxLAN monitoring and visibility is still an open area: there are not many tools available, and the existing tools need to mature. We at GAVS Technologies believe that VxLAN is an important technology that will see wider use in future, and we intend to support it fully with a monitoring tool. We can also support managed services for network infrastructures involving VxLAN environments.

Conclusion: VxLAN is a great technology which helps overcome the limitations of VLANs in a scaled environment involving virtual machines and containers.

 

From Quality Shift left to Continuous Quality

By Bouchra Gallucio

Vice President QMO – Healthfirst

Quality Shift Left (QSL) may be a new term, but it is by no means a new concept: it is nothing more than inserting quality controls and checks early in the software development process. What is new is the tools and the software development frameworks, such as Agile and CI-CD, that make QSL easier to implement and even critical to overall success. In this article, I will go over how QSL fits into the Agile and CI-CD delivery pipeline, and how QSL differs from Continuous Quality.

In a traditional waterfall process, the idea of QSL is to get in early and review business requirements and technical designs for quality, even writing test cases and scripts early on. But testing is still delayed: both QA and UAT are left until the end, which provides late feedback and then forces tough decisions that trade quality against the timeline.

In an Agile and CI-CD environment, quality is embedded in the iterative process, from backlog refinement to defining acceptance criteria, to testing small pieces of functionality and code early on (story validation) and early business review and approval. This is QSL on steroids. In fact, QSL becomes part of the iterative process and it is hard to say anymore where it starts and where it stops; this is the concept of Continuous Quality. Agile teams are building quality in as part of their process. Concepts like BDD/ATDD, TDD, code peer review and pair programming, and early code scanning are nothing but ways to make sure we produce a quality product in the first place, not six months or a year later.

Continuous Quality

In the world of Release Early, Release Often, with Higher Confidence, you need to shift left, but you also need to be able to do it fast and repeat it in shorter cycles; this is where Continuous Quality comes in. It is about quality throughout the continuous delivery pipeline, on the Dev side but also in Ops readiness and in the user experience after deployment. This is Quality 360, as I call it, not just left. "It works per requirement" is no longer enough; a product's quality now also looks at:

  • Experience:
    • Continuous experience.
    • Ambient experience.
  • Massive scale —highly automated.
  • Exploratory testing —shifting roles

Based on my 20 years’ experience promoting software quality through new ideas and approaches, I found that there are some main ingredients that will make your journey successful:

  • Changing perspectives and cultures
  • Leveraging the right tools
  • Shift from traditional QA metrics to something like a Quality Science
  • Helping your team upgrade their skills and manage their anxiety

Changing perspectives

In order to achieve Continuous Quality, some of the traditional QA and delivery perspectives need to change:

  • Shift focus to decentralized program teams rather than a more isolated Center of Excellence
  • Don't wait for "perfection"; take a measured-risk approach
  • Automation is your starting point, not your end point
  • Build teams around test engineers who understand code

Tools

Look for a set of tools that align with a set of strategic planning assumptions such as:

  • Often a shift from commercial to open source
  • Code-oriented solutions with integration to DevOps tool chain (i.e., CI)
    • Code: Review, static analysis, unit tests
    • User experience —responsive design —visual
  • Cross Browser Testing, BrowserStack, Applitools, Ivity Labs
  • Desired response

Gartner predicts that “By 2020, Selenium WebDriver will become the standard for functional test execution engine, and this will marginalize vendors that can’t provide strong, higher-level test functionality.”

Automation

A strong drive for automation does not remove the need for manual testing, of course, but in a Continuous Quality and CI-CD world, repeatability and speed are important, so without strategic automation planning CQ is impossible to achieve. Automation here is a lot more than your usual regression automation: it means looking at automation in everything testers do, from designing test cases and scope all the way to validating deployments, and everything in between.

Quality as Science – Better ways to measure Quality

Where do bugs come from? Can you pinpoint the specific area(s) that are causing the most defects? The real value of QA and test is generally misunderstood: we focus on the cost of quality rather than the value of quality and the cost of poor quality. This is why the old QA metrics and dashboards of defect leakage, pass rate, sum of defects and so on are no longer helpful; executives are asking for more insightful analysis of quality data, for example (a small sketch of such an analysis follows the list below):

  • Driven by questions:
    • How frequently does it occur?
    • How many people/processes are affected?
    • How much time/revenue has been lost?
  • Share information:
    • Customer observations. (QC feedback loop)
    • Test/Controls that are useful.
    • Anomalies you found and how.
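
As a minimal sketch of that question-driven view (the defect records, field names and numbers below are entirely hypothetical), the answers can be computed directly from a defect log rather than reported as raw counts:

```python
from collections import Counter
from datetime import date

# Hypothetical defect log entries: (component, users_affected, hours_lost, found_on)
defects = [
    ("checkout", 1200, 6.5, date(2019, 3, 2)),
    ("checkout", 300, 1.0, date(2019, 3, 9)),
    ("search", 50, 0.5, date(2019, 3, 11)),
]

# How frequently does it occur, and where?
frequency = Counter(component for component, *_ in defects)

# How many people are affected, and how much time has been lost?
impact = {
    c: (
        sum(users for comp, users, *_ in defects if comp == c),
        sum(hours for comp, _, hours, _ in defects if comp == c),
    )
    for c in frequency
}

for component, count in frequency.most_common():
    users, hours = impact[component]
    print(f"{component}: {count} defects, {users} users affected, {hours}h lost")
```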

Both zero and 1, can it be?

by Shalini Milcah

Yes, that's the power of superposition, which allows a quantum computer to hold every possible combination of 0 and 1 at once. This leads to the misconception that quantum computers achieve exponential speedups simply by processing every possible input in parallel via superposition. The catch is that superposition alone is useless, because measuring a superposition gives an inherently random result. At its core, the real innovation of quantum algorithms is the use of wave interference to de-randomize the superposition. In most cases the overhead of the quantum computing model is costly enough that the quantum speedup is small or non-existent, but for some important cases quantum algorithms do achieve an exponential speedup over classical (non-quantum) algorithms.
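
A minimal single-qubit sketch in Python (using nothing but NumPy, as an illustration rather than a real quantum program) shows the point: a Hadamard gate puts |0⟩ into an equal superposition, so measurement is a coin flip, but applying the gate twice makes the amplitudes interfere and the outcome becomes deterministic again.

```python
import numpy as np

# Single-qubit state |0> and the Hadamard gate
ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def measurement_probabilities(state):
    """Born rule: the probability of reading 0 or 1 is the squared amplitude."""
    return np.abs(state) ** 2

superposed = H @ ket0        # equal superposition of 0 and 1
interfered = H @ (H @ ket0)  # amplitudes interfere; the |1> path cancels out

print(measurement_probabilities(superposed))  # [0.5 0.5] -> random outcome
print(measurement_probabilities(interfered))  # ~[1. 0.]  -> deterministic 0
```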

Classical computers use bits to represent and store information; quantum computers are different. They use quantum bits, also known as qubits. A qubit can be a one, a zero, or a superposition of both at the same time, and several qubits together can form nontrivially correlated states, so-called entangled states. One good way to think of this is to imagine a sphere where each pole is a different state: a classical bit can be in one of two states, at either pole of the sphere, whereas a qubit can be at any point on the sphere, significantly increasing the number of representable states. This means a quantum computer using qubits can store an enormous amount of information using less energy than a classical computer, and because qubits can exist in these multiple states, quantum computers could be dramatically faster than even today's most powerful supercomputers for certain problems.

Fortunately, quantum computers are now publicly accessible through the cloud. Just five years ago, quantum computers were restricted to privately run laboratories, but the landscape has changed dramatically since 2016, when IBM launched their 'Quantum Experience', a free and publicly accessible quantum computer. Since then, other companies have announced similar cloud services, and several more are expected in the next couple of years.

Broadly, two relevant timescales can be observed in the history of quantum computing. Quantum computers on the 5-year horizon will be limited by error rates of about 0.1%, which means they will only support roughly 1,000 instructions before failing. By contrast, the original algorithms that motivated quantum computing (e.g. factorization, search) require hundreds of thousands of operations. Consequently, the dominant view until recently was that quantum computers would only achieve practical value on the 10+ year horizon, once we have computers big enough to support error-correction algorithms. However, the recent discovery of hybrid classical + quantum algorithms has motivated a new timescale of relevance: the NISQ (Noisy Intermediate Scale Quantum) era, which aims to achieve practical quantum speedups on computers being built now and over the next 5-10 years.

However, these NISQ hybrid algorithms are not rigorously proven to outperform classical algorithms. Nonetheless, there are strong empirical indications that NISQ algorithms will win. This is exciting because it means that quantum computing is very much at a point where reality and relevance meet.

Quantum computing is just the next step. There are still many problems we cannot solve easily, like solving a linear system of equations, optimizing parameters for support vector machines, finding the shortest path through an arbitrary graph, or searching through an unstructured list. These are rather abstract problems right now, but knowing the complexity involved in these algorithms, we can see how useful quantum approaches could turn out to be.

It's not immediately clear where it will be most effective, but given recent technology trends, its applications might include the big data revolution, where we use machine learning algorithms to process enormous amounts of data, and cryptography. Quantum computers operate on completely different principles from existing computers, which makes them well suited to particular mathematical problems, such as finding the prime factors of very large numbers. Since prime numbers are so important in cryptography, quantum computers would likely be able to crack many of the systems that keep our online information secure. Because of these risks, researchers are already trying to develop technology that is resistant to quantum hacking; on the flip side, quantum-based cryptographic systems would likely be much more secure than their conventional analogues.

The potentially new and valuable technologies debated in Quantum Computing till date include: quantum simulation, quantum sensors, quantum imaging, quantum clocks, and quantum software and algorithms.

In quantum simulation, purpose-built quantum computers would perform quantum-mechanics-level modelling of materials, which would be impractical on today’s classical computers. The simulations would elucidate the fine structures of superconductors and map out complex chemical reactions to predict whether a newly engineered material would be stable.

Quantum sensors and quantum imaging will be especially useful in medicine. For example, they will allow new ways to sense the heart's magnetic field, which could more accurately diagnose and distinguish heart diseases, and they will let us obtain images of things we have never been able to see before.

Quantum clocks, which track the vibrations of a single atom to provide almost unimaginable accuracy, will serve a wide range of purposes including accurate measurements of the local gravity potential and precision timing of financial transactions.  It is reported that the best of these quantum clocks could be made so accurate that they’d gain or lose no more than 1 second every 30 billion years.

New quantum algorithms could allow quantum computers to process data at a much higher speed, allowing for database searches, machine learning, and image recognition with unprecedented speed. Making use of such algorithms might be made easier for a broader range of coders because of quantum compilers that Microsoft and others are working on.

At this point, the value proposition of quantum computers has not yet been realized. A few companies (Amazon, Google, Microsoft, IBM) are spending millions on their development; if it succeeds, a narrow but important class of problems will finally be solvable. Others predict quantum computers may not work in practice because of limitations imposed by thermodynamics. Theoretically, though, they are well understood and extremely exciting.

Are Interactive shows the future of viewership?

by Rajalakshmi M

Does it matter if Stefan chooses Sugar Puffs over Frosties? Does it matter if he chooses Tangerine Dream's Phaedra or Isao Tomita's The Bermuda Triangle? Well, to find out, the reader needs to go watch Netflix's "choose-your-own-adventure" interactive drama, "Bandersnatch".

For the uninitiated: "Black Mirror: Bandersnatch" is the story of Stefan Butler, a young man building a computer game in the year 1984. His game is based on a "choose-your-own-adventure" book, also called Bandersnatch. The game design allows the player to make binary choices at various points and then see the consequences of those choices as the plot moves forward. The same happens to the viewer: we get to decide what happens in Stefan's life, choosing anything from something as trivial as a cereal to something as consequential as … well, I don't want to spoil it for the innocent reader.

Considered linearly, the acting is subtle, the story engaging and the retro setting perfect. In a game we always know the end goal is winning, but in a plot what should we expect? A happy ending? The interactivity lets the viewer decide what that happiness is. The choices are emotionally engaging, because the scriptwriter hands the huge responsibility for what happens to the character over to the viewer, and all the viewer gets is ten seconds to decide. In a conventional movie the viewer gets sucked into the characters with feelings of amazement and wonder about what comes next, in a very passive way; viewers might want a million other possibilities for their favourite character, but all they get is what the scriptwriter wrote. Bandersnatch, by contrast, pulls viewers into its plot because we think we can change what happens to our favourite character.

This format is not easy to make. The movie needed a branching narrative, with every branch filmed. Can all branches have a definite end? How do you handle potentially infinite loops? Some branches lead to a dead end, and the viewer has to choose differently to reach a definite ending. But does that mean starting all over again? The creators of "Bandersnatch" instead use a narrative device that recaps the action and offers a relevant choice that moves the movie forward. This would certainly have meant a longer script, a longer shoot, and the technological preparedness to organize stories with endless permutations, and above all the level-headedness to ensure that every branch either ends or loops back. There were times, as a viewer, when I felt let down by the ending that came with one choice. But credit must go to the fact that here is a format that makes the viewer go back and watch again to see if there could be another ending, and thus understand more about the show and its premise. These are early days for interactive entertainment, unlike its cousin, interactive gaming.
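
As a toy illustration of what such a branching structure looks like under the hood (the scene names and choices below are entirely made up, not the show's actual script), a choose-your-own-adventure plot can be modelled as a graph of scenes, with choices as edges and dead ends that recap and rejoin the story rather than restart it:

```python
# Each scene maps the choices offered to the scene each choice leads to.
story = {
    "breakfast": {"Sugar Puffs": "studio", "Frosties": "studio"},
    "studio": {"accept offer": "bad_ending", "refuse offer": "keep_coding"},
    "keep_coding": {"ship the game": "good_ending", "start over": "studio"},
    "bad_ending": {"recap": "studio"},  # dead end: recap, then rejoin the plot
}

def play(scene="breakfast"):
    """Walk the story graph interactively until a terminal ending is reached."""
    while scene != "good_ending":
        choices = story[scene]
        print(f"\nScene: {scene}")
        for i, choice in enumerate(choices, 1):
            print(f"  {i}. {choice}")
        picked = list(choices)[int(input("Choose: ")) - 1]
        scene = choices[picked]
    print("\nYou reached an ending:", scene)

print("scenes:", ", ".join(story))
# play()  # uncomment to walk the branches interactively
```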

But what does Bandersnatch provide to the entertainment industry?

  1. User involvement: Bandersnatch has been an experiment on the viewer. "If bad things happen, you'll feel even more crestfallen, because you were responsible," said Todd Yellin, Netflix's vice president for product. "If the character is victorious, you'll feel even more uplifted because you made that choice." Interactive content can thus turn the passive, laid-back experience of switching on the TV and watching into an active lean-forward, wait, participate and watch experience.
  2. User selections: Every user selection in the show helps build the data gold mine. With every choice of its 137 million subscribers, Netflix gets to understand the personality of its viewer.  The choice of cereal could help it understand consumer choices of its viewers. The emotional choices can help Netflix understand the content leanings thereby helping it curate more personalized content for its viewers.
  3. Content Piracy protection – Interactive content could make piracy difficult for interactivity needs a platform.
  4. Dynamic content: A new branch can be added later, and a new outcome shown. This keeps viewers on their toes and helps the show's marketing team get more views.
  5. Technological Future: The format of the content throws the possibility of using VR-enabled technology for movie viewing to take movie watching experience to the next level.

But what are the repercussions of the format? The 1984 setting could be an ode to George Orwell's classic Nineteen Eighty-Four, which touches on the theme of omnipresent government surveillance and propaganda. If a platform can track users' fantasies so intricately, isn't it easier to spread propaganda and track people? And, as Bob Bejan, who runs experiential marketing at Microsoft, says about interactive movies, "There's a finite amount of media the filmmaker creates, which he slices and dices to give the illusion of control while at the same time guiding the viewer through the underlying blueprint". So does the viewer really choose? If watching TV is a way to just switch off, does having to make choices for a show to move forward serve that purpose at all? Only time will tell whether more such shows appear and whether viewers love this illusion of control, despite the extra work needed to watch the show and the psyche they give away in the process.

Dear viewer the choice is yours!

Sources:

  1. https://www.netflix.com/in/
  2. https://www.nytimes.com/2018/12/28/arts/television/black-mirror-netflix-interactive.html

The #10YearChallenge – a vaccine for Artificial Stupidity

Jayashree Subramanian

On a cloudy, cold morning in Chennai, India, my smart devices went berserk. Not a big deal; people going crazy on social media is as common as dogs barking, perhaps more so. What could be crazier than people getting out of a moving car (the Kiki challenge) or going about blindfolded (the Bird Box challenge)? The 10-year challenge pales in comparison to these two; it is not crazy at all. This challenge, however, differs from other social media challenges in its very different purpose and unobvious motive. The #10yearchallenge is not just another marketing gimmick for customer engagement, feeding our egos and playing on the narcissism of millennials. It is part of a bigger plan: a plan to train Facebook's facial recognition programs, to teach them how people age, and to train the programs on simulating younger or older versions of people. Seems like a bit of a stretch?

In the last edition of engage, I discussed the fiasco created by two AI bots at Facebook. Facebook's image recognition AI is perhaps not as intelligent as we assume, and it requires training from millions and millions of data sets. Still not clear? Kate O'Neill puts it clearly on Wired.co.uk: "Imagine that you wanted to train a facial recognition program on age-related characteristics and, more specifically, on age progression (e.g., how people are likely to look as they get older). Ideally, you'd want a broad and rigorous dataset with lots of people's pictures. It would help if you knew they were taken a fixed number of years apart—say, 10 years". Let me explain further.

Imagine that a dangerous personality like Bin Laden were still alive, went underground, and resurfaced into the normal world after 10 years. One day, he gets hungry and is all out of snacks, so he goes to a nearby Walmart to get some Snickers and cola. Normal folks like us probably won't recognize him, because all the file photos and videos we saw on TV were from 10 years ago. The CCTV cameras and facial recognition programs in their current avatar would be none the wiser, because they too would be matching the faces in the live feed against file photos from 10 years ago.

If these programs were 'taught' how people age, how their features change over a fixed period of time, say 10 years, then in 2020 the programs could 'simulate' faces, 'predict' how Bin Laden would look in 2020 at the age of 62, and match faces from the live feed against a simulated face resembling how the person currently looks. Such a program would most certainly recognize Bin Laden if he ever fancied some Snickers and walked into a surveillance zone.

There would be no need for half-lockets or family songs to find lost family members or siblings; you could just pop their pictures into a computer and ask the system to simulate how the person would look in the present. A bunch of Bollywood movies would have had very different endings, wouldn't they? "Yaadon ki Baarat", the world's first movie in the 'Bollywood' genre, wouldn't have been created had this AI program been available in 1973. Goodness, no! Jokes aside, simulating age progression is just one simple and straightforward application of such training. The learnings from this data set could have far more use cases and applications.

You might think: Facebook already has pictures of me from 10 years ago, pictures of me all through these ten years, pictures of my dog, my siblings, my parents, my old roommates, things I don't even remember until they come up as a 'Memory' on my Facebook wall. So what does Facebook have to gain from a single picture of mine from 2009 and a picture from now?

How AI programs learn is not very different from how humans learn. Imagine you want to teach a child the names of fruits. What do you do? You wouldn't reach for a Cosmopolitan or the Times to teach a child fruits, right? Why? Because there probably won't be fruits in there, and even if there were, you won't find all the fruits; there would be more unwanted information (noise) than the information you need. So you would go for a picture book with a set of pictures of fruits with clear labels. These days there are also interactive games on smart devices, but both have a clearly defined set of fruits with clear names, and they are meant for the express purpose of teaching children the names of fruits.

Now, if Facebook wanted to teach its facial recognition AI how people's features change over the years, it's not going to use all the pictures in your profile. Firstly, there is too much noise (pictures of your dog, your old roommates, the pasta you made last month). Secondly, you didn't upload the pictures in chronological order, especially around 2009, when smartphones were not yet prevalent. You might have uploaded a picture from 2007 in 2009, or a picture from 1991, when you were a baby, in 2013. You might have uploaded a scanned picture of your mother from the 1970s in 2009; that doesn't mean the picture was taken in 2009. You and I as humans would understand that this picture is not from 2009, but can an AI program? The answer is, most probably, not. The program might even mistake your mother for you, if you share physical similarities. Maybe you captioned the picture "My beautiful mother"; the AI could then understand that it's a picture of your mother, but it has no way to tell that it's an old picture from the 1970s and not from 2009. Because AI is, to be blunt, stupid. Like a small child who is yet to learn things. At least until we teach them.

AI programs learn from huge data sets, which need to be clean and labelled. Teaching an AI program age progression using the plethora of pictures on FB would be very inefficient, and there is little chance the program would learn what was intended; it's like trying to teach a child the names of fruits using the Times or Cosmopolitan. There are little bits of useful data (fruits) amid lots of unnecessary data (everything else), and the child will most probably get distracted and confused. Enter the 10-year challenge: millions of pictures that are exactly 10 years apart, between 2009 and 2019, along with other important data like the subject's (your) gender, age, race, etc. Millions and millions of such pictures, available to the AI program to learn from, with context.
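
A minimal sketch of why that matters to a learning pipeline (all class names, fields and file names here are hypothetical; this is not Facebook's actual system): the challenge effectively hands over cleanly labelled before/after pairs, which is exactly the shape of data an age-progression model would train on.

```python
from dataclasses import dataclass

@dataclass
class AgeProgressionSample:
    """One cleanly labelled training example harvested from a 10-year-challenge post."""
    photo_2009: str   # path or URL of the 'before' picture
    photo_2019: str   # path or URL of the 'after' picture
    age_in_2009: int
    gender: str
    years_apart: int = 10  # the known, fixed gap is what makes the pair valuable

# A messy profile upload gives none of these guarantees; the challenge does.
dataset = [
    AgeProgressionSample("alice_2009.jpg", "alice_2019.jpg", age_in_2009=21, gender="F"),
    AgeProgressionSample("bob_2009.jpg", "bob_2019.jpg", age_in_2009=35, gender="M"),
]

# A model would then learn a mapping: f(photo_2009, years_apart) -> photo_2019
for sample in dataset:
    print(f"train: {sample.photo_2009} -> {sample.photo_2019} (+{sample.years_apart} yrs)")
```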

The programs are then at liberty to measure the angle of the sag of the under-eye bags, the number of lines in the crow's feet, or the presence or absence of them across millions of data sets; to store them, compare them, find a pattern and establish a general principle. 'A learning', if you will. And voila, in a few months Facebook might be able to simulate with decent accuracy how people change as they grow old, how 13-year-olds are likely to look when they are 22. Soon, they'll probably be able to predict how your child will look when he or she grows up. A Bin Laden, or anybody classified as dangerous by a government, most certainly couldn't step out for a snack. Able to picture the power of such an AI program now? The possibilities? Do you still think this is just another social media trend? Of course not.

I've been talking as if this trend was started by FB. That's not quite accurate. Like other trends and challenges, it is unclear who created this one, but seeing how much FB stands to gain, the trend could well have been created by the tech giant itself.

So, why now, in 2019? Why didn't it happen before, say in 2018 or 2017? Some analysts opine, and I agree, that FB was simply not that popular before 2009; that was the year FB attracted millions in investor funding along with millions of users. A 10-year challenge in 2017 wouldn't have made much sense, as there weren't many pictures available from 2007. FB itself was founded in 2004 and was opened to the public only in 2006. So it makes perfect sense to have such a trend in 2019.

I'm not saying you should be afraid of technology giants, and I'm definitely not spreading paranoia. I'm just telling you to stay smart about all of it, and to join me in admiring the power, possibility and stupidity of AI.

The limitations that we consciously put on our AI programs are called artificial stupidity and that could very well be the key to preventing AI from taking over. Let’s continue discussing artificial stupidity in the next edition, Stay tuned!

 


Evolution of Big Data

by Saviour Nickolas Derel Joseph Fernandez

The term “Big Data” may have been around for some time now, but there is still quite a lot of confusion about what it means. In truth, the concept is continuously evolving, as it remains the driving force behind many ongoing waves of digital transformation, including artificial intelligence, data science and the Internet of Things. But what exactly is Big Data and how is it changing our world?

Big Data:

It all starts with the explosion in the amount of data we have generated since the dawn of the digital age. This is largely due to the rise of computers, the Internet and technology capable of capturing data from the world we live in. Going back even before computers and databases, we had paper transaction records, customer records etc. Computers, and particularly spreadsheets and databases, gave us a way to store and organize data on a large scale. Suddenly, information was available at the click of a mouse.

We’ve come a long way since early spreadsheets and databases, though. Today, every two days we create as much data as we did from the beginning of time until 2000. And the amount of data we’re creating continues to increase rapidly.

Nowadays, almost every action we take leaves a digital trail. We generate data whenever we go online, when we carry our GPS-equipped smartphones, when we communicate with our friends through social media or chat applications, and when we shop. You could say we leave digital footprints with everything we do that involves a digital action, which is almost everything. On top of this, the amount of machine-generated data is rapidly growing too.

How does Big Data work?

Big Data works on the principle that the more you know about anything or any situation, the more reliably you can gain new insights and make predictions about what will happen in the future. By comparing more data points, relationships begin to emerge that were previously hidden, and these relationships enable us to learn and make smarter decisions. Most commonly, this is done through a process that involves building models, based on the data we can collect, and then running simulations, tweaking the value of data points each time and monitoring how it impacts our results. This process is automated – today’s advanced analytics technology will run millions of these simulations, tweaking all the possible variables until it finds a pattern – or an insight – that helps solve the problem it is working on.
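
As a minimal sketch of that model-and-simulate loop (the pricing model, its parameters and the data below are made up purely for illustration), an analysis might sweep a variable across candidate values and keep the one that best explains the observed outcomes:

```python
# Made-up historical observations: (price offered, units actually sold)
observations = [(9.0, 105), (10.0, 98), (11.0, 88), (12.0, 76)]

def model(price, elasticity):
    """A toy demand model: predicted units sold at a given price."""
    return 200 - elasticity * price

def error(elasticity):
    """How far the model's predictions are from what really happened."""
    return sum((model(p, elasticity) - sold) ** 2 for p, sold in observations)

# "Run the simulation" for many candidate values of the variable we tweak
candidates = [e / 10 for e in range(50, 151)]  # elasticity from 5.0 to 15.0
best = min(candidates, key=error)

print(f"best-fitting elasticity: {best:.1f}")
print(f"predicted demand at price 13: {model(13, best):.0f} units")
```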

In the past, anything that wasn't easily organised into rows and columns was simply too difficult to work with and was ignored. Now, though, advances in storage and analytics mean that we can capture, store and work with many different types of data. Thus, "data" can now mean anything from databases to photos, videos, sound recordings, written text and sensor data.

To make sense of all this messy data, Big Data projects often use cutting-edge analytics involving artificial intelligence and machine learning. By teaching computers to identify what this data represents (through image recognition or natural language processing, for example), they can learn to spot patterns much more quickly and reliably than humans.

Industrial impact of Big Data in 2020:

Machine Learning and Artificial Intelligence will proliferate

This powerful duo will only get stronger. Continuing our round-up of the latest trends in big data, let us take stock of how AI and ML are doing in the big data industry. Artificial intelligence and machine learning are the two sturdy technological workhorses working to transform seemingly unwieldy big data into an approachable stack. Deploying them lets businesses experience the algorithmic magic through practical applications such as video analytics, pattern recognition, customer churn modelling, dynamic pricing, fraud detection, and many more. IDC predicts that spending on AI and ML will rise to $57.6 billion in 2021, and companies pouring money into AI are optimistic that their revenues will increase by 39% in 2020.

Rise of Quantum Computing

The next computing juggernaut is getting ready to strike: quantum computers. These are powerful computers built on the principles of quantum mechanics, although you must wait patiently for at least another half a decade before the technology hits the mainstream. One thing is certain: it will push the envelope of traditional computing and enable analytics of unthinkable proportions. Predictions for big data are thus incomplete without quantum computing.

Edge analytics will gain increased traction

The phenomenal proliferation of IoT devices demands a different kind of analytics solution, and edge analytics is probably the fitting answer. Edge analytics means conducting real-time analysis of data at the edge of the network, at the point where the data is captured, without transporting that data to a centralized data store. Because of its on-site nature, it offers clear benefits: reduced bandwidth requirements, minimized impact of load spikes, lower latency, and superb scalability. Surely edge analytics will find more corporate takers in future. One survey says that between 2017 and 2025 the total edge analytics market will expand at a moderately high CAGR of 27.6% to pass the $25 billion mark. This will have a noticeable impact on big data analytics as well.

Dark data

So, what is dark data, anyway? Every day, businesses collect a lot of digital data that is stored but never used for any purpose other than regulatory compliance, kept around because we never know when it might become useful. Since data storage has become easy, businesses are not leaving anything out: old data formats, files and documents within the organization are just lying there, accumulating in huge amounts every second. This unstructured data can be a goldmine of insights, but only if it is analysed effectively. According to IBM, by 2020 upwards of 93% of all data will fall into the dark data category. Thus, big data in 2020 will inarguably reflect the inclusion of dark data; the fact is we must process all types of data to extract maximum benefit from data crunching.

Usage:

This ever-growing stream of sensor information, photographs, text, voice and video data means we can now use data in ways that were not possible before, and this is revolutionising the world of business across almost every industry. Companies can now accurately predict what specific segments of customers will want to buy, and when they will buy. Big Data is also helping companies run their operations much more efficiently.

Even outside of business, Big Data projects are already helping to change our world in several ways, such as:

  • Improving healthcare: Data-driven medicine involves analysing vast numbers of medical records and images for patterns that can help spot disease early and develop new medicines.
  • Predicting and responding to natural and man-made disasters: Sensor data can be analysed to predict where earthquakes are likely to strike next, and patterns of human behavior give clues that help organisations give relief to survivors and much more.
  • Preventing crime: Police forces are increasingly adopting data-driven strategies based on their own intelligence and public data sets in order to deploy resources more efficiently and act as a deterrent where one is needed.
  • Marketing effectiveness: Along with helping businesses and organizations make smart decisions, Big Data drastically increases their sales and marketing effectiveness, greatly improving their performance in the industry.
  • Prediction and decision making: Now that organizations can analyse Big Data, they have started using it to mitigate risks across various aspects of their businesses. Using Big Data to reduce decision-making risk and to make predictions has become one of its many benefits across industries.

Concerns:

Big Data gives us unprecedented insights and opportunities, but it also raises concerns and questions that must be addressed:

  • Data privacy: The Big Data we now generate contains a lot of information about our personal lives, much of which we have a right to keep private.
  • Data security: Even if we decide we are happy for someone to have our data for a purpose, can we trust them to keep it safe?
  • Data discrimination: When everything is known, will it become acceptable to discriminate against people based on the data we have on their lives? We already use credit scoring to decide who can borrow money, and insurance is heavily data-driven.
  • Data quality: There is not enough emphasis on quality and contextual relevance. The trend is to collect more raw data, closer to the end user, but raw data has quality issues, and reducing the gap between the end user and the raw data only amplifies them.

Facing up to these challenges is an important part of Big Data, and they must be addressed by any organisation that wants to take advantage of data. Failure to do so can leave businesses vulnerable, not just in terms of their reputation, but also legally and financially.

Conclusion:

As we all know, data started with 0s and 1s, but it has now evolved far beyond our expectations; that is how far our technology has grown. It will keep growing in the future. In simple terms, "data rules the world".

Data is changing our world and the way we live at an unprecedented rate. If Big Data is capable of all this today – just imagine what it will be capable of tomorrow. The amount of data available to us is only going to increase, and analytics technology will become more advanced.

For businesses, the ability to leverage Big Data is going to become increasingly critical in the coming years. Those companies that view data as a strategic asset are the ones that will survive and thrive. Those that ignore this revolution risk being left behind.

 

Software Defect Prevention – in a nutshell

Purushotham Narayana

 

Organizations face many problems that impede rapid development of software systems critical to their operations and growth. The challenge in any software product development lies in minimizing the number of defects. Occurrence of defects is the greatest contributor to significant increases in product costs due to correction and rework time. Most defects are caused by process failures rather than human failures. Identifying and correcting process defects will prevent many product defects from recurring.

This article will present various tools and techniques for use in creating a Defect Prevention (DP) strategy that, when introduced at all stages of a Software life cycle, can reduce the time and resources necessary to develop high quality systems. Specifically, how implementing a model-based strategy to reduce Requirement Defects, Development Rework and Manual test development efforts will lead to significant achievements in cost reduction and total productivity.

 Defect Prevention

Defect Prevention (DP) is a strategy applied to the software development life cycle that identifies root causes of defects and prevents them from recurring. It is the essence of Total Quality Management (TQM). DP, identified by the Software Engineering Institute as a Level 5 Key Process Area (KPA) in the Capability Maturity Model (CMM), involves analyzing defects encountered in the past and specifying checkpoints and actions to prevent the occurrence of similar defects in the future. In general, DP activities are a mechanism for propagating the knowledge of lessons learned between projects.

Mature IT organizations have an established software process to carry out their responsibilities. This process is enhanced when DP methodologies are implemented to improve quality and productivity and reduce development costs. Figure 1 clearly depicts that identifying defects late in the game is costly.

Figure 1: Software Defect Rate Of Discovery Versus Time

A model for an enhanced Software Process, including a DP strategy, is presented in Figure 2. Adopting a DP methodology will allow the organization to provide its clients with a product that is “of High Quality and Bug Free.”

Figure 2: Defect Prevention Strategy for Software Development Process

On a macro level defects can be classified and filtered as depicted in Figure 3.

Figure 3: Filter or Whirlpool Diagram for Software Defects

Features of Defect Prevention

 

Management must be committed to following a written policy for defect prevention at both the organization and project level. The policy should contain long-term plans for funding, resources and the implementation of DP activities across the organization including within management, to improve software processes and products. Once in place, a review of results provides identification of effective activities and lessons learned to further improve the organization’s success in applying a DP strategy.

To assist in the successful implementation of a DP strategy, members of the software-engineering group and other software-related groups should receive training to perform their DP activities. Training should include software quality assurance, configuration management and document support and focus on DP and statistical methods (e.g., cause/effect diagrams and Pareto analysis).

Creation of an Action Plan plays a key role in the implementation process. At the beginning of a software task, the members of the team meet to prepare for the task and related DP activities. A kick-off meeting is held to familiarize members of the team with details of the implementation process. Included in the meeting is information related to the software process, standards, procedures, methods, and tools applicable to the task, with an emphasis on recent changes; inputs required and available for the task; expected outputs; and methods for evaluation of outputs and of adherence to the software process. A list of common errors and recommended preventive actions are also introduced along with team assignments, a task schedule and project goals.

Periodic reviews are conducted by each of the teams assigned to coordinate DP activities. During the reviews, action items are identified and priorities set based on a causal analysis that determines:

  • the causes of defects,
  • the implications of not addressing the defects,
  • the cost to implement process improvements to prevent the defects, and
  • the expected impact on software quality.

 

A Pareto analysis is helpful in setting priorities and provides direction for assigning action items or reassigning them to other teams, making changes to activities and documenting the rationale for decisions.
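
As a minimal sketch of such an analysis (using the four causal categories described later in this article, but with hypothetical defect counts rather than the case-study data), a Pareto view simply ranks categories by frequency and shows the cumulative share, so the few causes behind most defects stand out:

```python
# Hypothetical defect counts by cause category
defect_counts = {
    "Communication": 58,
    "Education": 34,
    "Oversight": 22,
    "Transcription": 6,
}

total = sum(defect_counts.values())
cumulative = 0.0

print(f"{'Cause':<15}{'Count':>7}{'Share':>9}{'Cum.':>8}")
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    share = 100 * count / total
    cumulative += share
    print(f"{cause:<15}{count:>7}{share:>8.1f}%{cumulative:>7.1f}%")
```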

Case Study

A case study of a real-world scenario is discussed below, along with statistics derived from the analysis.

The Reference Line

As a first step, a "Defect Analysis of Past Projects" was performed to create a reference line for the PIE. As many as 1,336 defects were analyzed from the baseline project (TETRA Release 1) and two other projects to increase statistical significance. A detailed root cause analysis was performed on all defects, and the Beizer [3] taxonomy was used as the classification vehicle. The analysis covered five development phases: Requirement Specifications, Architectural Design, Detailed Design, Coding and System Test Case Preparation. Based on this analysis, specific Defect Prevention (DP) solutions were determined for each phase.

The Beizer Taxonomy included ten major categories, each of which was divided into three levels, resulting in a 4-digit number which specifies unique defects. The ten top level categories were:

0xxx Planning
1xxx Requirements and Features
2xxx Functionality as Implemented
3xxx Structural Bugs
4xxx Data
5xxx Implementation
6xxx Integration
7xxx Real-Time and Operating System
8xxx Test Definition or Execution Bugs
9xxx Other

The causes of the defects, as determined by the engineers doing the classification, fell into four major categories: Communication, Education, Oversight and Transcription.

In creating the reference line, detailed interviews were held with 24 software engineers. The interviews allowed a full understanding of the reason for each defect, classification of its cause, and an understanding of possible defect prevention activities. This data mining was performed on all defects, resulting in a series of classification tables and a Pareto analysis of the most common problems. The results of the Pareto analysis according to the Beizer taxonomy top-level categories are presented below, in descending order.

  • Requirements and Features (1xxx) 47.0%
  • Functionality as Implemented (2xxx) 13.5%
  • Structural Bugs (3xxx) 9.3%
  • Implementation (5xxx) 8.3%
  • Data (4xxx) 6.9%
  • Integration (6xxx) 5.7%
  • Real time and Operating system (7xxx) 4.9%
  • Test definition or Execution bug (8xxx) 4.3%

Within each development phase in the baseline project, the defects were further classified based on the Beizer Taxonomy. For example, in the Requirement Specifications Phase, the second level breakdown of the main defects occurred as follows:

  • Requirement Completeness (13xx) 37.5%
  • Requirement Presentation (15xx) 34.7%
  • Requirement Changes (16xx) 11.2%
  • Requirement Incorrect (11xx) 8.7%

The third level breakdown of the main Requirement Completeness defect was:

  • Incomplete Requirements (131x) 73.4%
  • Missing, unspecified requirements (132x) 11.2%
  • Overly generalized requirements (134x) 4.6%

The same type of data analysis was performed for each development phase selected for the PIE. The next step was to identify a tool-set of phase-specific improvement activities, based on the root cause analysis, that would prevent defects from recurring in the next release. Highest priority was given to the most common defect types. Extensive training and phase kickoff meetings were held to empower the development team to integrate DP activities into the existing process. The development team then applied the improvement activities determined in the analysis phase to the development phases, and ongoing defect recordings and measurements were performed.

The final step was to compare the numbers and types of TETRA Release 2 defects with those of the reference line. The effectiveness of the prevention tool-set was measured in the quantity and types of defects found in the second release of the project. The effective prevention actions could then be integrated into the OSSP to improve quality and cycle time for all the projects in MCIL. The impact on the OSSP, including changes to Review Guidelines and changes to the Phase Kickoffs, are considered part of the PIE results.

Results and Conclusion

As a result of the project, the overall number of defects in TETRA Release 2 decreased by 60% compared with the number of defects detected in TETRA Release 1 (the reference line project). In part this is because Release 2 is a continuation project rather than an initial project, and later releases usually have fewer defects due to more cohesive teams, greater familiarity with the application domain, experience, and fewer undefined issues. Based on numbers from other MCIL projects, we estimate that half of the decrease can be attributed to the implementation of the PIE. A breakdown of defects by phase of origin shows the following results.

Table 1: Breakdown of Defects by Phase

The absolute reduction in defects, which relates to the % Improvement shown in the above table, can be observed in the following figure.

Figure 4: Reduction of Defects by Phase

The obvious observation is that a higher percentage of the defects migrated to later phases of the development process: from Requirement Specifications, Preliminary Design and Detailed Design to Coding. In TETRA Release 1, 76.5% of the defects were in the Requirement and Design phases and only 23.4% in Coding, while in TETRA Release 2, 45.5% of the defects were in Requirement and Design and 54.5% in Coding. This implies that the DP methods employed in the early phases of development were very effective.

The % Improvement column shows the improvement within each development phase with respect to the absolute number of defects. This is a different view of the improvement in the number of defects, partially attributable to the Improvement Actions.

Another comparison was made with respect to the cause categories. For those results and the rest of the analysis, you can read the entire article here: https://www.isixsigma.com/tools-templates/software-defect-prevention-nutshell/

 

References

[1] R.G. Mays, C.L. Jones, G.J. Holloway, D.P. Studinski, "Experiences with Defect Prevention", IBM Systems Journal, Vol. 29, No. 1, 1990.
[2] Watts S. Humphrey, Managing the Software Process, Chapter 17: Defect Prevention, ISBN 0-201-18095-2.
[3] Boris Beizer, Software Testing Techniques, Second Edition, 1990, ISBN 0-442-20672-0.

Out of the trenches to AIOps – the Peacekeeper

Bindu Vijayan

The last thing an IT team wants to hear is 'there is an issue', which usually has them rushing to the 'battle zones' to try and resolve it: 'problem with the apps?', 'is it the network?', desperately trying to kill the problem while it grows larger within the enterprise. No credit for crumbling SLAs; the fire-fighting sometimes continues long and hard.

IT Operations teams are often battling heavy volumes of alerts, having to deal with hundreds of incident tickets arising from the environment and from the performance of its apps and infrastructure. They are constantly overwhelmed trying to manage and respond to every alert in order to avoid the threat of outages and heavy losses.

The number of components within the infrastructure keeps increasing; today a stack can have more than 10,000 metrics, and that sort of complexity increases the number of potential points of failure. Add the faster change cycles enabled by DevOps, cloud computing and so on, and there is very little time to take control or take action. Under such circumstances, AIOps is fast emerging as a powerful solution to this constant battle, with the efficiency that AI and ML can bring. We are looking more and more at unsupervised methods and processes to read the data and make it coherent, to 'see the unknown unknowns', and to remediate or surface problems before they impact customers. Adopting AI into IT operations provides increased visibility through machine learning, a reduction in incidents and false alarms, and the advantage of predictive warnings that can do away with outages. It means insights are implemented through automation tools, saving the time and effort of the teams concerned.

With AIOps gathering and processing the data, very little or almost no manual intervention is required: algorithms help automate the work, due diligence gets done, and rich business insights are produced. AIOps thus becomes the much sought-after solution to the multitudinous problems of complex IT enterprises.

“The global AIOps Platform market is expected to generate a revenue of US$ 20,428 million with a CAGR of 36.2% by 2025,” reports Coherent Market Insights.

Gartner recommends that AIOps be adopted in phases. Early adopters typically start by applying machine learning to monitoring, operations and infrastructure data, before progressing to using deep neural networks for service and help desk automation.

The greatest strength of AIOps is that it can find the potential risks and outages in the environment that humans cannot anticipate, and it can do so with greater consistency and faster time to value. The complexity of an IT enterprise is so huge that it makes an ideal scenario for ML, Data Science and Artificial Intelligence: specific machine learning algorithms can help solve problems that are impossible for humans to reduce to simple instructions and remediations. AIOps becomes the real answer to tackling critical issues while at the same time eliminating the false positives that usually make up a large percentage of the 'events' reflected in monitoring tools.
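A common first step toward that kind of noise reduction is to collapse duplicate alerts that share a fingerprint within a short time window, so that one incident produces one event instead of dozens. The sketch below shows the idea; the alert fields and the five-minute window are illustrative assumptions, not tied to any specific monitoring tool.

# Minimal sketch of alert deduplication: alerts with the same (host, check)
# fingerprint within a 5-minute window collapse into a single event.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

alerts = [
    {"host": "app-01", "check": "cpu_high",  "ts": datetime(2020, 1, 1, 10, 0)},
    {"host": "app-01", "check": "cpu_high",  "ts": datetime(2020, 1, 1, 10, 2)},
    {"host": "app-01", "check": "cpu_high",  "ts": datetime(2020, 1, 1, 10, 4)},
    {"host": "db-01",  "check": "disk_full", "ts": datetime(2020, 1, 1, 10, 3)},
]

def deduplicate(raw_alerts):
    """Keep one representative event per fingerprint per rolling time window."""
    last_seen, events = {}, []
    for alert in sorted(raw_alerts, key=lambda a: a["ts"]):
        key = (alert["host"], alert["check"])
        if key not in last_seen or alert["ts"] - last_seen[key] > WINDOW:
            events.append(alert)
        last_seen[key] = alert["ts"]
    return events

print(f"{len(alerts)} raw alerts -> {len(deduplicate(alerts))} events")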

Gartner predicted that by this year about 25% of enterprises globally would have implemented an AIOps platform. That obviously means increasing complexity and huge data volumes, but also deeper insights and more intelligence within the environment. Experts say this implies that AI is going to reach all the way from the device or environment to the customer.

ChatOps

AIOps is fast paced; it is believed that in the next decade the majority of large enterprises will take to 'multi-system automations' and will host digital colleagues: we are going to have virtual engineers attending to queries and tasks. IT service desks are going to be 'manned' by digital colleagues, who will take care of frequent and mundane tasks with minimal or no human intervention. It is predicted that this year will see the emergence of ChatOps, where enterprises introduce "AI-based digital colleagues into chat-based IT Operations", and these digital colleagues will make a major impact on how IT operations function.

Establishing digital service desk bots brings speed and agility into the service. Reports say that actions which hitherto took up to 20 steps can now be accomplished with a single phrase and a couple of clarifications from the digital colleague. This saves human labor hours and lets those skills be channeled to more important areas, with mundane and frequent tasks such as password resets, catalogue requests and access requests being taken care of by digital colleagues. They can be entrusted with all incoming requests, and those they cannot process are automatically escalated to the right human engineers, as sketched below. Even L3 and L4 issues are expected to be resolved by digital colleagues, with workflows created by them and approved by human engineers. AI is going to keep recommending better and deeper automations, and we are going to see the true power of human-machine collaboration.
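That hand-off, with a digital colleague resolving routine requests itself and escalating the rest, can be pictured with a very small routing sketch. The intents, handlers and queue name below are hypothetical and stand in for a much richer natural-language and workflow layer.

# Toy ChatOps router: known routine intents are automated, everything else is
# escalated to a human queue. All names here are illustrative assumptions.
AUTOMATED_INTENTS = {
    "password_reset": lambda user: f"Password reset link sent to {user}.",
    "access_request": lambda user: f"Access request ticket raised for {user}.",
}

def handle_request(intent: str, user: str) -> str:
    handler = AUTOMATED_INTENTS.get(intent)
    if handler:
        return handler(user)  # resolved end-to-end by the digital colleague
    return f"Escalated '{intent}' for {user} to the human engineer queue."

print(handle_request("password_reset", "alice"))
print(handle_request("database_failover", "bob"))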

Humans will collaborate more and more with digital colleagues: change requests get created on a simple command, with resolutions delivered within minutes or assigned to human colleagues. Algorithms are expected to integrate operations ever more tightly. AI is going to simplify tasks such as identifying and inviting the right people into root cause analysis sessions and holding post-resolution meetings to ensure continuous learning.

With AIOps, IT operations is going to reconstruct most tasks with AI and automation. It is reported that 38.4% of organizations take at least 30 minutes to resolve an incident, and adopting AIOps is definitely the key to bringing that down. We may be looking at a future where we have the luxury of an autonomous data center, and human resources in IT can truly spend their time on strategic decisions, business growth and innovation, and become more visible contributors to the organization's growth.

 

Reference

https://www.coherentmarketinsights.com/market-insight/aiops-platform-market-2073 

AIOps – the answer to your IT’s complexities

Balaji Uppili, Chief Customer Success Officer, GAVS

With the increasing efficiency and sophistication of our IT systems, their complexity opens up a constant slew of challenges for IT Ops departments, and Artificial Intelligence for IT Operations (AIOps) today has emerged as the answer to manage such complexities.

AIOps combines the power of Big Data, Machine Learning and automation, and offers process automation independent of manual resources. What makes AIOps a winner is its ability to combine data-driven insights from various systems and operational tools, bringing significant improvements and cost efficiency and making it probably the best solution for now and for the future.

It drives data through analytics to produce meaningful, actionable insights, and the subsequent optimization and transformation using Machine Learning supports informed decisions and enables IT Ops resources to spend more of their time on quality tasks that support business goals rather than fighting the day-to-day blips and glitches.

With GAVS' GAVel attracting so much attention, we thought this was the best time to bring you some insights from our leadership.

 

In conversation with Balaji Uppili, Chief Customer Success Officer at GAVS:

  • Why do you think AIOps is picking up pace suddenly, when these issues have existed in the industry for many years?

Balaji: AIOps is really picking up because day-to-day operations are becoming more complex and the number of operational issues is increasing by the day. Also, AIOps is no longer being used as just an operational tool but as a strategic one. This frees up bandwidth from an operational standpoint and on the cost side as well. It also provides a lot more predictability and a more proactive approach.

  • What do you think is the next phase of AIOps?

Balaji: The various dimensions of Machine Learning, like Reinforcement Learning and deep reinforcement learning, would definitely take off. A good interface with a virtual assistant / conversational assistant is the future.

  • Is AI truly helping infra teams stay ahead of the curve, or is it just hype?

Balaji: It is not hype at all. There is really no alternative to it either. AI is now the key expectation for running efficient and seamless operations.

  • How do you quantify the ROI to CIOs when they invest in these products?

Balaji: The quantification comes from the reduction in license costs for the various tools being deployed, and from the optimization due to automation and shift-left from a process optimization standpoint. These translate directly into lower operational costs, for both assets and resources (labor included).

  • What makes GAVel the biggest differentiator in the market?

Balaji: Its ability to reduce noise in the operations world of an enterprise (eliminating unwanted data points and spikes due to seasonality in operations), combined with its prediction capability (learning from past historical data and arriving at models for the future), helps the CIO stay ahead of operations in terms of both end-user experience and costs.
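As a generic illustration of those two ideas (this is not GAVel's actual algorithm), the sketch below removes a daily seasonal pattern from a metric series with a standard seasonal decomposition and then flags only the spikes that remain in the residual, so recurring daily peaks are no longer reported as anomalies. The series, period and threshold are assumptions made for the example.

# Generic sketch: strip daily seasonality, then flag residual spikes only.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(7)
idx = pd.date_range("2020-01-01", periods=24 * 14, freq="H")            # two weeks, hourly
daily_cycle = 10 * np.sin(2 * np.pi * idx.hour.to_numpy() / 24)         # recurring load pattern
series = pd.Series(50 + daily_cycle + rng.normal(0, 1, len(idx)), index=idx)
series.iloc[200] += 25                                                  # one genuine incident

decomposition = seasonal_decompose(series, model="additive", period=24)
residual = decomposition.resid.dropna()
threshold = 4 * residual.std()
print("Flagged after removing seasonality:", list(residual[residual.abs() > threshold].index))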

  • How do you claim to have embraced AI in GAVel?

Balaji: The various algorithms are very much AI-driven. Self-learning from both historical data and the current environment and context, and using that to predict the future, is all built into the platform.

  • What is the success rate your customers have seen by adopting GAVel?

Balaji: As regards automation, customers have seen at least 30%+ automation of operations and processes over 12 months, and some even 50%+. With regards to noise reduction and correlation, about 70%+ (avoiding duplicates and eliminating seasonalities), and predictions in some cases with about 80%+ probability.

  • How do you think an organization should evaluate the right AIOps platform? What parameters should they consider?

Balaji: The theme of our platform at GAVS is "Zero Incident". How can we get all enterprises to a zero-incident state? If that theme is applied, then each and every aspect of the operation is evaluated against the zero-incident journey, and this will automatically result in massive cost savings and a significant increase in end-user experience.

  • By implementing an AIOps platform, are organizations creating unemployment?

Balaji: That is a myth and a wrong assumption. If we don't automate and don't become a responsive and agile enterprise, the businesses won't run. The future is all about AIOps and beyond, and hence adoption of these concepts by the teams is critical. It will help the teams reskill themselves in automation, data science and related areas, and thereby enhance their own value both within the organization they serve and outside it. AIOps platforms are evolving to make you better, and change management and a reshaping of the workforce go with it.

  • How do you make sure GAVel is always ahead of the curve in the AIOps space?

Balaji: Our internal research and marketing teams play a huge role in keeping us ahead. In addition, our partnerships with Microsoft, Gartner, Everest and, more importantly, IIT Madras help us stay ahead of the curve.

GAVel is now at your fingertips… Deploy and get insights in 60 minutes; try it for free…