The Chatty Bots!

Padmapriya Sridhar

Chatbots can be loosely defined as software that simulates human conversation. They are widely used as textbots or voicebots: in social media, on websites to provide the initial engagement with visitors, in customer service/IT operations teams to provide tier-1 support round the clock, and, as we’ll see later in the blog, in integration with enterprise tools and systems for various other organizational needs. Their prevalence can be attributed to how easy it has become to get a basic chatbot up and running quickly, using the intuitive drag-and-drop interfaces of chatbot build tools. There are also many cloud-based free or low-cost AI platforms for building bots using the provided APIs. Most of these platforms also come with industry-specific content, add-on tools for analytics, and more.

Rule-based chatbots can hold basic conversations with scripted ‘if/then’ responses for commonly raised issues and FAQs, and redirect appropriately for queries beyond their scope. They use keyword matches to retrieve relevant information from their datastore. Culturally, as we begin to accept and trust bots to solve problems and extend support, as companies begin to see value in these digital resources, and with heavy investments in AI technologies, chatbots are gaining traction and becoming more sophisticated. AI-led chatbots are far more complex than their rule-based counterparts and provide dynamically tailored, contextual responses based on the conversation and interaction history. Natural Language Processing capabilities give these chatbots the human-like skill to comprehend nuances of language and gauge the intent behind what is explicitly stated.
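A rule-based bot of this kind can be sketched in a few lines of Python. The rules, keywords and canned replies below are invented purely for illustration:

```python
import re

# A minimal sketch of a rule-based chatbot's scripted 'if/then' keyword
# matching. All rules, keywords and replies here are hypothetical.
RULES = {
    ("password", "reset"): "You can reset your password at the self-service portal.",
    ("refund",): "Refunds are processed within 5-7 business days.",
}
FALLBACK = "Let me connect you with a human agent."

def respond(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, reply in RULES.items():
        # A rule fires when every one of its keywords appears in the message
        if all(k in words for k in keywords):
            return reply
    # Redirect queries beyond the bot's scope
    return FALLBACK

print(respond("How do I reset my password?"))
```

Anything outside the scripted rules falls through to the human-handoff response, which is exactly the redirect behavior described above.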

The Artificial Neural Network (ANN) for Natural Language Processing (NLP)

An ANN is an attempt at a tech equivalent of the human brain! You can find our blog on ANNs and Deep Learning here.

Traditional AI models are incapable of handling highly cognitive tasks like image recognition, image classification, natural language processing, speech recognition, text-to-speech conversion, tone analysis and the like. There has been a lot of success with Deep Learning approaches for such cerebral use cases. For NLP, handling the inherent complexities of language, such as sentiment, ambiguity or insinuation, necessitates deeper networks and a lot of training with enormous amounts of data. Each computational layer of the network progressively extracts finer and more abstract details from the inputs, essentially adding value to the learnings from the previous layers. With each training iteration, the network adapts, auto-corrects and fine-tunes its weights using optimization algorithms, until it reaches a maturity level where it is almost always correct in spite of input vagaries. The USP of a deep network is that, armed with this knowledge gained from training, it is able to extract correlations and meaning even from unlabeled and unstructured data.
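The weight-adjustment idea can be shown with a toy example. This is not a deep network, just a single weight learning y = 2x by gradient descent, to illustrate how each training iteration auto-corrects the weight:

```python
# Illustrative only: one weight learning y = 2x via gradient descent,
# showing the iterate-and-correct loop described above.
def train(samples, lr=0.1, epochs=50):
    w = 0.0  # start with an uninformed weight
    for _ in range(epochs):
        for x, y in samples:
            y_hat = w * x                # forward pass: current prediction
            grad = 2 * (y_hat - y) * x   # gradient of squared error wrt w
            w -= lr * grad               # optimizer step: correct the weight
    return w

w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 3))  # converges to 2.0
```

A real network does this across millions of weights and many layers, but the adapt/auto-correct/fine-tune cycle is the same.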

Different types of neural networks are particularly suited for different use cases. Recurrent Neural Networks (RNNs) are good for sequential data like text documents, audio and natural language. RNNs have a feedback mechanism where each neuron’s output is fed back as weighted input, along with other inputs. This gives them ‘memory’, meaning they remember their earlier inputs, but with time those inputs get diluted by the presence of new data. A variant of the RNN helps solve this problem. Long Short-Term Memory (LSTM) models have neurons (nodes) with gated cells that can regulate whether to ‘remember’ or ‘forget’ their previous inputs, giving more control over what needs to be remembered for a long time versus what can be forgotten. For example, it helps to ‘remember’ when parsing through a text document, because its words and sentences are most likely related, but ‘forgetting’ is better when moving from one text document to the next, since they are most likely unrelated.
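The gating mechanics can be sketched with a toy, single-cell LSTM step in plain Python. The weights below are arbitrary numbers chosen only to expose how the forget, input and output gates regulate the cell state:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy single-cell LSTM step with scalar state; weights are made-up numbers
# used only to illustrate the gating described above.
def lstm_step(x, h_prev, c_prev, w):
    f = sigmoid(w["f"] * x + w["uf"] * h_prev)    # forget gate: keep or drop old memory
    i = sigmoid(w["i"] * x + w["ui"] * h_prev)    # input gate: admit new information
    g = math.tanh(w["g"] * x + w["ug"] * h_prev)  # candidate memory content
    c = f * c_prev + i * g                        # updated cell state ('long-term' memory)
    o = sigmoid(w["o"] * x + w["uo"] * h_prev)    # output gate
    h = o * math.tanh(c)                          # hidden state passed to the next step
    return h, c

w = {"f": 1.0, "uf": 0.5, "i": 1.0, "ui": 0.5,
     "g": 1.0, "ug": 0.5, "o": 1.0, "uo": 0.5}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:   # a short input sequence
    h, c = lstm_step(x, h, c, w)
```

When the forget gate `f` is near 1, old memory carries through (useful within a document); when it is near 0, the cell state is effectively reset (useful between unrelated documents).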

The Chatbot Evolution

In the 2019 Gartner CIO Survey, CIOs identified chatbots as the main AI-based application used in their enterprises. “There has been a more than 160% increase in client interest around implementing chatbots and associated technologies in 2018 from previous years”, says Van Baker, VP Analyst at Gartner.

Personal and business communication keeps morphing into the quickest, easiest and most convenient mode of the time: from handwritten letters to emails, phone calls, SMSs, and now mere status updates on social media. Mr. Baker goes on to say that millennials in the workplace, with their demand for instant, digital connections, will have a large impact on how quickly organizations adopt the technology.

Due to these evolutionary trends, more organizations than we might think have taken a leap of faith and added these bots to their workforce. It is quite interesting to see how chatbots are being put to innovative use, either stand-alone or integrated with other enterprise systems.

Chatbots in the Enterprise

Customer service and IT service management (ITSM) are the use cases through which chatbots gained entry into the enterprise. Proactive personalized user engagement, consistency and ease of interaction, round-the-clock availability and timely addressing of issues have lent themselves to operational efficiency, cost effectiveness and enhanced user experience. Chatbots integrated into ITSM help streamline service, automate workflow management, reduce MTTR, and provide always-on services. They also make it easier to scale during peak usage times, since they reduce the need for customers to speak with human staff and the need to augment human resources to handle the extra load. ChatOps is the use of chatbots within a group collaboration tool, where they run between the tool and the user’s applications to automate tasks like providing relevant data and reports, scheduling meetings and emailing. They ease collaboration between siloed teams and processes, as in a DevOps environment where they double up as the monitoring and diagnostic tool for the IT landscape.

In E-commerce, chatbots can boost sales by taking the customer through a linear shopping experience from item search through purchase. The bot can make purchase suggestions based on customer preferences gleaned from product search patterns and order history.

In Healthcare, they seamlessly connect healthcare providers, consumers and information, easing access to each. These bot assistants come in different forms catering to specific needs: personal health coach; companion bot providing much-needed conversational support for patients with Alzheimer’s; confidant and therapist for those suffering from depression; symptom-checker that offers an initial diagnosis based on symptoms and enables remote text or video consultation with a doctor as required; and so on.

Analytics provide insights, but often not fast enough for the CXO. Decision-making becomes quicker when executives can query a chatbot to get answers, rather than drilling through a dashboard. Imagine getting immediate responses to requests like: “Which region in the US has had the most sales during Thanksgiving? Send out a congratulatory note to the leadership in that region. Which region has had the poorest sales? Schedule a meeting with the team there. Email me other related reports for this region.” As can be seen, chatbots work in tandem with other enterprise tools like analytics, calendar and email to make such fascinating forays possible.

Chatbots can handle the mundane tasks of Employee Onboarding, such as verifying mandatory documents, getting required forms filled, directing new hires to online training and ensuring its completion.

When integrated with IoT devices, they can help in Inventory Management by sending out notifications when it’s time to restock a product, tracking shipment of new orders and alerting on arrival.

Chatbots can offer Financial Advice by recommending investment options based on transactional history, current investments or amounts idling in savings accounts, alerting the customer to market impact on their current portfolio, and much more.

As is evident now, the possibilities of such domain-specific chatbots are endless, and what we have seen is just a sampling of their use cases!

Choosing the Right Solution

The chatbot vendor market is crowded, making it hard for buyers to fathom where to even begin. The first step is an in-depth evaluation of the company’s unique needs, constraints, main use cases and enterprise readiness. The next big step is to decide between off-the-shelf and in-house solutions. An in-house build will be an exact fit to needs, but it might be difficult to get long-term management buy-in to invest in related AI technologies, compute power, storage, ongoing maintenance and a capable data science team. Off-the-shelf solutions need a lot of scrutiny to gauge whether the providers are specialists who can deliver enterprise-grade chatbots. Some important considerations:

The solution should be:

  • Platform & Device Agnostic, so it can be built once and deployed anywhere
  • Equipped with good Integration Capabilities with tools, applications and systems in the enterprise
  • Robust, with solid security and compliance features
  • Versatile, to handle varied use cases
  • Adaptable, to support future scaling
  • Extensible, to enable additional capabilities as the solution matures, and to leverage innovation for advanced features such as multi-language support, face recognition, and integration with VR, Blockchain and IoT devices
  • Blessed with a Personality! Bots with a personality add a human touch that can be quite a differentiator. Incorporating soft features such as a natural conversational style, tone, emotion, and a dash of humor can give an edge over the competition.

About the Author:

Priya is part of the Marketing team at GAVS. She is passionate about Technology, Indian Classical Arts, Travel and Yoga. She aspires to become a Yoga Instructor some day!

Deepfakes – Another reason for you not to believe everything you see on the internet!

Soundarya Kubendran

An episode of the latest season of the speculative fiction series, Black Mirror, explored the mounting risks of advanced technology in the entertainment industry. The episode depicted a pop star being replaced by her digital avatar. However, I wouldn’t call that speculative fiction anymore, with recent developments in technology having demonstrated the possibility of such scenarios in the near future.

The technology I was referring to is Deepfake, a portmanteau of the terms ‘deep learning’ and ‘fake’. Deepfake has been splashed across news since 2017 when an explicit video with faces of celebrities doctored onto other actors was posted online. This sparked a conversation on the internet about the dangers of Deepfake – they can be used to manipulate facts in politics, propagate fake news and harass individuals. Is this technology as dangerous as it is perceived, or does it have limitations like every other technology? To get to the bottom of this, we need to understand how it works and what sort of algorithms are used.

Deepfake is a technology that uses deep learning to fabricate entirely new scenes or alter existing videos. Although face-swapping has been prevalent in movies, it required skilled editors and CGI experts. For example, after actor Paul Walker’s death in 2013, his remaining scenes in Furious 7 were created with the help of his brothers and the VFX team. Deepfake, on the other hand, uses machine learning systems to make videos appear genuine, and the results are usually difficult for the layman to identify. Deepfakes can be created or edited by anybody, without editing skills.

Generative Adversarial Networks (GANs) are used in creating deepfake videos. GANs are a class of machine learning systems used for unsupervised learning, developed and introduced by Ian J. Goodfellow and his colleagues in 2014. GANs are made up of two competing neural network models – a generator and a discriminator – which can analyze, capture and copy the variations within a dataset. The generator creates fake videos and the discriminator detects whether the generated videos are fake. The generator keeps creating fake content until the discriminator is no longer able to detect that it is fake. If the dataset provided to the model is large enough, the generator can create very realistic fake content. FakeApp is one such application that can be easily downloaded by users to create deepfakes. The website https://thispersondoesnotexist.com/ generates a new, realistic facial image of a non-existent person from scratch every time the page is refreshed.
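The adversarial loop can be caricatured in a few lines. This is a drastically simplified, purely illustrative sketch on 1-D numbers, not a real GAN (no neural networks, no gradients through a discriminator); it only shows the back-and-forth in which the generator chases what the discriminator currently considers ‘real’:

```python
import random

# A toy 'GAN' loop on 1-D numbers. 'Real' data clusters around 4.0; the
# generator learns a single offset, and the discriminator's knowledge is
# reduced to one learned center. All numbers here are invented.
random.seed(0)

def real_sample():
    return 4.0 + random.uniform(-0.5, 0.5)

def fake_sample(offset):
    return offset + random.uniform(-0.5, 0.5)

offset, center, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Discriminator step: refine its notion of what 'real' looks like
    center += lr * (real_sample() - center)
    # Generator step: nudge output toward what currently fools the discriminator
    offset += lr * (center - fake_sample(offset))

print(round(offset, 1))  # the generator's output now clusters near the real data
```

In a real GAN both players are deep networks trained with backpropagation, but the dynamic is the same: the generator improves exactly as fast as the discriminator gets harder to fool.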

The potential of this technology is concerning. As Peter Singer, cybersecurity and defense-focused strategist and senior fellow at New America, said, “The technology can be used to make people believe something is real when it is not.” It can be misused by political parties to manipulate the public and feed them misinformation. It can also become a weapon for online bullying and harassment through the release of doctored videos.

To raise awareness about the risks of misinformation, a video was released with Barack Obama’s face morphed onto filmmaker Jordan Peele. Fake videos of Mark Zuckerberg and Nancy Pelosi have also been doing the rounds on the internet.

Deepfake technology is already on the US government’s radar. California has recently banned the use of deepfakes in politics to stop them from influencing the upcoming election. The SAG-AFTRA (Screen Actors Guild-American Federation of Television and Radio Artists), which has been at the forefront of battling these technologies, commended the governor for signing the bill. The Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with several of the country’s biggest research institutions to combat deepfakes. DARPA’s MediFor (Media Forensics) program awarded three contracts to SRI International, a California-based non-profit research group, for researching new ways to automatically detect manipulated videos and deepfakes. Researchers at the University at Albany also received funding from DARPA to study deepfakes. This team found that analysing the blinks in videos could be one way to tell a deepfake from an unaltered video, since the models are trained largely on still photographs and there are not many photographs of people blinking. Some researchers have suggested watermarking deepfakes to avoid misleading people, but as we know, watermarks can be easily removed. Interestingly, Blockchain could be part of the solution. Registries of authentic data are being created and stored on Blockchain, and photos and videos can be verified against these registries. This is particularly useful for journalists and activists to ensure the credibility of what they are sharing.

There are a few limitations to the technology behind deepfakes at present. Firstly, GANs require a large dataset to train a model that generates photo-realistic videos; that is probably why politicians and celebrities are targeted more. Also, running such a model needs heavy computing power, which can be expensive. At about $0.50 a GPU-hour, it costs around $36 (roughly 72 GPU-hours) to build a model just for swapping person A to B and vice versa, and that doesn’t include the bandwidth needed to get training data, or the CPU and I/O to pre-process it. FakeApp uses TensorFlow, a machine learning framework that supports GPU-accelerated computation using NVIDIA graphics cards. Even though the application allows users to train models without a GPU, the process might then take weeks instead of hours.

As far as positive applications of this technology go, it can help filmmakers save a lot of money by morphing the faces of popular actors onto the bodies of lesser-known ones. Another interesting application, which could result in a unique viewing experience, would be giving viewers a selection of actors to digitally place in a movie. However, this could put actors’ jobs at risk.

As of today, the negatives of this technology far outweigh the positives. If strict regulations are not put in place, then it may as well turn our lives into a Black Mirror episode.

References:

https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html

https://www.kdnuggets.com/2018/03/exploring-deepfakes.html

https://medium.com/twentybn/deepfake-the-good-the-bad-and-the-ugly-8b261ecf0f52

https://www.theverge.com/2019/10/7/20902884/california-deepfake-political-ban-election-2020

https://www.theverge.com/tldr/2018/4/17/17247334/ai-fake-news-video-barack-obama-jordan-peele-buzzfeed

https://www.news18.com/news/buzz/zao-a-new-chinese-ai-app-lets-you-swap-your-face-with-any-celebrity-in-8-seconds-2295115.html

https://www.geeksforgeeks.org/generative-adversarial-network-gan/

https://thispersondoesnotexist.com/

https://www.theverge.com/tldr/2019/2/15/18226005/ai-generated-fake-people-portraits-thispersondoesnotexist-stylegan

https://www.wired.com/story/wired-cartoons-week-7/

About the Author:

Soundarya is a developer at GAVS and loves exploring new technologies. Apart from work, she loves her music, memes and board games.

Demystifying Customer Centricity – A simplistic take

Vidyarth Venkateswaran

I am among those many who are intrigued by how some organizations can seamlessly surpass expectations and deliver a great customer experience. I find it especially revealing when the concept is examined in the context of organizations with an array of seemingly every-day products or services. Let us consider the case of a simple Indian restaurant that I frequented with my wife when we spent a week in Bali.

In case you are wondering why we went searching for Indian food in another country (ask again), it was the only restaurant open near our Airbnb when we landed there at 10 pm. We were driving from the airport, hungry, and really needed a decent meal. With clear guidance from the missus to avoid fine dining, given the time they take to bring out whatever we might order, we were looking for something simple, preferably a cuisine that reminded us of home.

We stumbled upon this small Indian restaurant close to our Airbnb that was decently crowded. It also had a decent amount of space to park our car (which we later realized is a luxury in Bali). So, we went in and ordered Palak Paneer (cottage cheese in spinach gravy) and some Jeera Rice (cumin rice) each. Outright, it was delicious! The incident that followed is really what caught my eye.

After having the palak paneer and rice, we were still hungry. But, knowing another dish each would be too much, we decided to share something. We called on our waiter, a tall, dark guy perhaps in his mid-forties. He was wearing a pale white shirt, a pair of shorts, and a faded purple towel hung on his shoulder.

“Anything else, sir?” he asked.

“What other North Indian dishes do you have?” my wife asked.

As he started rattling off all possible combinations of paneer, followed by some chaat (savoury Indian snack), we stopped him at Chole Bhature. For the uninitiated, it is handpicked dough made of wheat flour, rolled into a thin sheet and deep-fried to golden-brown perfection, served with a lip-smacking chickpea gravy with a pinch of coriander (CREDITS: a fine dining menu). Or, as a good friend of mine describes it in layman terms, it is a poori (an Indian flatbread) the size of an inflated airbag, served with chickpea curry and a dollop of butter on it.

Once we had decided, my wife and I agreed to order one portion that we would eventually split. The waiter nodded and went inside the kitchen.

After a few minutes, he came back with the Chole Bhature, already cut in half. He had informed the chef that we were planning to share it and requested that the bhatura be cut in half before deep-frying it. As he was serving it, he alluded to how his grandmother always used to emphasize that the root of all arguments between couples was an unhappy stomach, which is why he did not want to give it a chance. My wife and I were both surprised and delighted with the whole experience.

The waiter understood that it would be messy to tear a full bhatura in half. So, he told the chef to cut it in half and fry it to make our lives easy. I thought it was a great example of how to nail customer experience.

An article I read recently on Forbes spoke about a study conducted under the umbrella of the American Customer Satisfaction Index. The study focused only on customers in the USA who participated in objective evaluations of the quality of goods and services purchased in America and produced by domestic and foreign firms with substantial US market shares. The results revealed a common thread in their “Claim to fame”. Here’s what I found:

  • Genuinely caring about customer outcomes makes a real difference. Remembering that all business is eventually a transaction between human beings is critical. Taking a genuine interest in the customer’s pain points, goals and objectives, rather than focusing on the task or transaction, makes it real.
  • Recognizing that customer experience is not a trade-off. While firms are constantly dealing with real-world pressures of profitability and costs, the ones that believe positive customer experience is non-negotiable make their mark. They are able to inspire loyalty and almost build a fan base that tides them through thick and thin.
  • Investing in providing a positive employee experience is crucial. The famous words of Richard Branson about how happy employees make for happy customers need no references.

Case in Point: In Phil Knight’s Shoe Dog, much is said about the early years of Nike. The salesmen who worked in Nike stores maintained a personal relationship with every aspiring athlete, be it someone on a university running team or a professional athlete. They knew every athlete’s requirements, their upcoming races, and so on. Some even sent postcards to the athletes to ask about their races. This was one of the reasons star athletes endorsed Nike when they were at the peak of their careers. Nike cared, and the athletes reciprocated when they became famous.

We at GAVS are proud of the focus and emphasis we place on customer centricity as part of our culture at the firm. Our transition from being enablers of Delivery Excellence to enablers of Customer Success, as we have grown from strength to strength, is a testament to what the concept means to all of us as partners at the firm. This is seen in everything we do, from how we host our customers and partners at our office to the accelerators and enablers we use as part of our Customer Success Management framework in solutioning and delivery. As a GAVSian who is relatively new to the system, I am happy to see such a refreshing approach to customer centricity.

About the Author:

Vidyarth is an Associate Vice President with the Customer Success team at GAVS Technologies. An ex-Accenture Strategy client engagement manager with over 8 years of strategy and management consulting experience across multiple industries, he has managed projects and client accounts across the USA, Europe, Singapore, Japan and Malaysia. He has led numerous engagements enabling digital transformation, operating model definition and improvement, business process and IT architecture, and supply chain transformation for clients. In his role at GAVS, Vidyarth drives strategic interventions focused on transforming GAVS’ business practices to enable best-in-class IP and solution delivery to our clients.

Do You Have a Strategy for Leveraging Your Key Customers?

Betsy Westhafer

“It only takes 10% of a population holding an unshakeable belief to convince the rest of the population to adopt the same belief.” ~SNARC

Have you ever thought about what would happen if the top 10% of your happy customers went out and told your other customers and prospects how great your company is? Have you ever considered the impact of having your greatest advocates out in force to help tell your story?

Customer-driven growth is not just a good idea; it’s imperative in a highly dynamic and competitive market. There is no dearth of content written about “Advocacy Marketing,” but we’re talking about much more than getting testimonials. What I am suggesting is that companies develop complete strategies around the concept of Customer Leverage.

By creating a systematic and holistic approach, companies can leverage the power of their customer relationships for accelerated growth.

Take, for instance, a Customer Advisory Board. In this setting, customers sit face-to-face with the executive leaders from their vendor or partner, providing insights beneficial to the host company while having the opportunity to influence the direction of a key supplier. In addition, board members get to network and share best practices with their peers, while the executives from the host company build deeper, trusting relationships with those in attendance. Win-win all around!

And here’s where the magic happens. Because of the nature of the advisory board, members feel compelled to help guide the host company to success, which includes advocating on their behalf. Because the relationships have been built in a confidential, transparent setting, trust is high and so they are now more open to participating in various modes of advocacy efforts. This may include co-writing a white paper, participating in a case study, sharing the stage for a panel discussion, co-hosting a webinar, or any assortment of activities that provide mutual value.

While many companies do this on an ad-hoc basis, the real winners are the organizations that have a customer leverage strategy that encompasses all aspects of customer-driven growth.

Consider this framework for a strategic approach to customer leverage:

  • Audit

Establishes a baseline from which to identify and enhance areas in which you can effectively leverage key customers; provides your Customer Leverage Score™ (CLS).

  • Advisory Board

Creates an ongoing system for leveraging customers in a strategic advisory capacity and deepens and strengthens trusting customer relationships.

  • Advocacy Programs

Leverages the strength of key relationships to develop opportunities for customers to advocate on your behalf, while at the same time, creating value for themselves.

  • Analytics

Indicates the success of the Customer Leverage Strategy by monitoring Key Performance Indicators and Return on Investment (ROI).

When approached in this manner, companies are more likely to find success in their efforts.

What Is a Customer Leverage Score™?

Much like the NPS (Net Promoter Score) measures the loyalty of a company’s customer relationships, the CLS™ measures the organization’s ability to leverage those loyal relationships.

During a CLS audit, various members of a leadership team are asked a series of questions regarding their current ability to leverage their customers. They come in the form of yes/no questions, along with a measure of consistency and effectiveness. An index score is calculated and then each area is measured in terms of perceived priority for the organization. Combining the index score with the prioritization leads to the overall Customer Leverage Score. Both quantitative and qualitative data are considered.

Categories for the audit discussions include, but are not limited to, Advocacy, Strategy, Innovation, Networking, and Internal Alignment. Each category is averaged to identify areas of strength and weakness.
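The actual CLS™ methodology is proprietary, but the arithmetic described above (question-level index scores combined with priorities, then averaged per category) might look something like this hypothetical sketch; every category, number and weight below is invented:

```python
# Purely hypothetical CLS-style arithmetic: each question gets an index
# score (0-1) and a perceived priority (1-3); each category averages its
# priority-weighted scores, and the overall score averages the categories.
audit = {
    "Strategy": [(0.6, 3), (0.4, 2)],   # (index score, priority) per question
    "Advocacy": [(0.8, 1), (0.9, 2)],
}

def category_score(answers):
    weighted = sum(index * priority for index, priority in answers)
    return weighted / sum(priority for _, priority in answers)

scores = {cat: round(category_score(ans), 2) for cat, ans in audit.items()}
overall = round(sum(scores.values()) / len(scores), 2)
print(scores, overall)
```

The per-category averages make areas of strength and weakness easy to compare, which is the point of the audit.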

Here’s an example:

Category: Strategy

Question: Do we let our customers know how valuable they are to us by giving them an opportunity to influence our decision-making and strategic direction?

Possible answer:

Yes, we have a Customer Advisory Board where our customers can influence our decision making and strategic direction. We are not consistent with having our board meetings, however, and when we do have them, they are only moderately successful. This is a high priority for us, but we just can’t seem to get the internal resources aligned to make them happen consistently and effectively.

The CLS asks various questions in the format you see above and serves as a baseline to measure the effectiveness of a customer leverage program. It’s recommended that the CLS audit be repeated on an annual basis to monitor progress.

Other Key Metrics

As with any great strategy, it’s important to have key performance indicators to ensure the strategy is securing the intended outcomes. Other key metrics for a customer leverage program may include:

  • Advocacy efforts leading to new business
  • Retention of Customer Advisory Board members
  • Account expansion among advisors and advocates
  • Others

Leveraging your key customers is undoubtedly the fastest way to win in your market and creates a significant and unique competitive advantage. There truly is no downside to utilizing your key relationships for mutual benefit.


About the Author:

Betsy Westhafer is the CEO of The Congruity Group, a US-based consultancy focused on customer leverage programs. She is also the author of the #1 Best Seller, “ProphetAbility – The Revealing Story of Why Companies Succeed, Fail, or Bounce Back,” available on Amazon.

The shifting pH of Databases from ACID to BASE

Bargunan Somasundaram

Today it is said that data is the new oil; I would add that data is the new gold. Industry 4.0 is focused on data, now considered one of the most important commodities. ‘Big data’ has become an inevitable reality, but bigger isn’t always better: big insights matter more than big data. To extract value from gold or oil, it needs to be processed – fashioned into jewellery, minted into coins or refined into petroleum products. Similarly, data must be processed and held in a vault (a database or datastore). Big insights are possible only with the right database for daily operations. An explosion of consumer data has led IT companies and giants to shift the pH of their databases from ACID to BASE. Let’s see how.

In the early years of computing, punch cards were used for input, output and data storage; they offered a fast way to enter and retrieve data. Databases came along next. Database Management Systems allowed us to organize, store and retrieve data from a computer – a way of communicating with a computer’s “stored memory.” Airlines were among the first industries to identify the need for such systems: the SABRE reservation system, developed by IBM, helped American Airlines manage its data. Datastores have since evolved from the primitive CODASYL approach to SQL (ACID) and on to NoSQL (BASE).

Transactions

The idea of transactions, their semantics and guarantees evolved with data management. As computers became more powerful, they were tasked with managing more data, and eventually multiple users would share data on one machine. This led to problems of data being changed or overwritten while other users were in the middle of a calculation. This was an issue that needed addressing, so the academics were called in, and they came up with the ACID properties for transactions to solve these consistency issues.

In the context of databases, a sequence of database read/write operations that satisfy the ACID properties (these can be perceived as a single logical operation on the data) is called a transaction.

To understand the importance of transactions, consider this analogy: transferring money from one account to another. This operation involves two steps:

  1. Deduct the amount from the sender’s bank account
  2. Add the amount to the receiver’s bank account

Now think of a situation where the amount is deducted from the sender’s account but is not credited to the receiver’s account due to some error. Such issues are handled by transaction management, where both steps are performed as a single unit. In case of failure, the transaction is rolled back.
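The transfer example can be sketched with Python’s built-in sqlite3 module. The accounts table and names are invented for illustration; the point is that both UPDATEs commit together or roll back together:

```python
import sqlite3

# The two-step transfer as one transaction; table and names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("sender", 1000), ("receiver", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # Step 1: deduct the amount from the sender
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Step 2: add the amount to the receiver
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()      # both writes become visible as a single unit
    except sqlite3.Error:
        conn.rollback()    # on failure, neither write survives
        raise

transfer(conn, "sender", "receiver", 100)
```

If anything fails between the two UPDATEs, the rollback restores the pre-transaction state, so money is never deducted without being credited.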

Below are the basic tenets of the ACID Model, a set of guidelines for ensuring the accuracy of database transactions.

  1. Atomicity
  2. Consistency
  3. Isolation
  4. Durability

Atomicity

It is the guarantee that a series of operations either succeed or fail together, because all components of a transaction are treated as a single action. If one part of a transaction fails, the database’s state remains unchanged; there are no partial updates.

For example, a business transaction might involve confirming a shipping address, charging the customer and creating an order. If one of these steps fails, all should fail.

Consistency

Consistency is the second tenet of the ACID model. A transaction either creates a new and valid state of data or, if any failure occurs, returns all data to its state before the transaction was started.

For example, a column in a database may only have the values for Days as “Monday” to “Sunday”. If a user were to introduce a new day, then the consistency rules for the database would not allow it.
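This kind of rule can be enforced with a CHECK constraint. A minimal SQLite sketch (the table and column names are assumptions for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The CHECK constraint encodes the consistency rule: only real day names.
conn.execute("""
    CREATE TABLE schedule (
        task TEXT,
        day  TEXT CHECK (day IN ('Monday','Tuesday','Wednesday','Thursday',
                                 'Friday','Saturday','Sunday'))
    )
""")
conn.execute("INSERT INTO schedule VALUES ('backup', 'Sunday')")      # valid

try:
    conn.execute("INSERT INTO schedule VALUES ('report', 'Funday')")  # invalid
    rejected = False
except sqlite3.IntegrityError:
    rejected = True  # the database refuses to enter an inconsistent state
```

The invalid insert never reaches the table: the transaction that would have violated the rule is rejected, preserving consistency.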

Isolation

Transactions require concurrency control mechanisms that guarantee correctness even when transactions are interleaved. Isolation hides uncommitted state changes from the outside world, so failing transactions can never corrupt the state of the system. It is achieved through concurrency control, using pessimistic or optimistic locking mechanisms.

Here is an example: If Bob issues a transaction against a database while Harry issues a different transaction, both transactions should operate on the database in isolation. The database should either perform Bob’s entire transaction before executing Harry’s or vice-versa. This prevents Bob’s transaction from reading intermediate data produced as a side effect of part of Harry’s transaction that will not eventually be committed to the database.

Bob’s Transaction                     Harry’s Transaction
Read Bob’s balance ($1000)
Deduct $100 for a movie               Read Bob’s balance, which is $900,
(uncommitted balance: $900)           not $1000 (a dirty read)
Update account with $900              Add $600 to Bob’s account
                                      Update Bob’s account
                                      (total = $1500, not $1600)

If Bob’s transaction were later rolled back, the correct total would be $1000 + $600 = $1600; because Harry read the uncommitted $900, the account ends up at $1500 instead.

It is important to note that the isolation property does not ensure that a specific transaction will execute first, only that they will not interfere with each other.
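Bob and Harry’s interleaving can be replayed as a small deterministic sketch (plain dictionaries stand in for committed and uncommitted database state; this only illustrates the dirty read, not a real database engine):

```python
# Committed state vs. Bob's in-flight, uncommitted view of it.
committed = {"bob": 1000}
uncommitted = dict(committed)

# Bob's transaction: deduct $100 for the movie (not yet committed).
uncommitted["bob"] -= 100              # 900, visible only inside Bob's txn

# Without isolation, Harry reads the uncommitted value - a dirty read.
harry_sees = uncommitted["bob"]        # 900, should have been 1000
uncommitted["bob"] = harry_sees + 600  # Harry deposits $600 -> 1500

# If Bob's transaction now rolls back, the correct total is
# 1000 + 600 = 1600, but Harry's dirty read has baked in 1500.
```

With isolation, Harry’s read would have been blocked (pessimistic locking) or validated at commit time (optimistic locking), so he could never observe Bob’s uncommitted $900.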

Durability

After the successful completion of a transaction in the system, the data remains in the correct state, even in case of a failure and system restart.

The Need for BASE Models

Let’s go through the lifecycle of an application to understand the need for the BASE model. Let’s suppose an e-commerce application is developed. At the initial soft launch, the database is moved from a local workstation to a shared, remotely hosted MySQL instance with a well-defined schema. As soon as the application becomes popular, a problem arises. There are just too many reads hitting the database.

This is quite usual with any application. The first attempt at a fix is to cache frequently executed queries, typically using memcached or a third-party cache provider like EHCache or OSCache. But note that the reads are no longer in compliance with the ACID model: the data is inconsistent because it now lives in more than one place, and the cache serves older, stale data until the database updates it.
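The staleness problem can be seen in a minimal cache-aside sketch (the key names and values here are made up for illustration):

```python
db = {"product:42": {"price": 100}}   # the system of record
cache = {}                            # stands in for memcached, etc.

def read(key):
    if key in cache:                  # cache hit: freshness is not guaranteed
        return cache[key]
    value = db[key]                   # cache miss: fall through to the DB
    cache[key] = value
    return value

read("product:42")                    # warms the cache with price 100
db["product:42"] = {"price": 80}      # DB updated; cache not invalidated
stale = read("product:42")            # still serves the old price - stale data
```

Until the cache entry expires or is explicitly invalidated, readers see the old value: exactly the ACID-breaking inconsistency described above.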

As the application’s popularity grows, new features like faceted search, on-page checkout, customer reviews, and live chat are introduced. If each feature lived in its own table, hundreds of joins would be required to prepare a single page, greatly increasing query complexity. To avoid too many joins, the schema must be denormalized.

If the application’s popularity surges further, it will swamp the server and slow things down. Server-side computations such as stored procedures are therefore moved to the client side. Even after this, some queries remain slow, so the most complex queries are periodically pre-materialized and joins are avoided wherever possible.

Now the reads might be okay, but the writes are getting slower, so the secondary indexes and triggers are dropped. At this point, the DB is left with:

  • No ACID properties, due to caching
  • No normalized schema, due to denormalization
  • No stored procedures, triggers, or secondary indexes

At this scale, the ACID model is overkill and would hinder the operation of the database. These issues gave birth to a softer model called BASE, which is used extensively by NoSQL datastores.

Basic tenets of BASE model

  • Basic Availability

The datastore guarantees availability even in the presence of multiple failures. Thanks to replication, the database appears to work most of the time.

  • Soft State

Soft State indicates that the state of the system may change over time, even without input, because of the eventual consistency model. In other words, datastores don’t have to be write-consistent or mutually consistent at all times.

  • Eventual Consistency (Weak consistency)

When multiple copies of the data reside on separate servers, an update may not be immediately made to all copies simultaneously. So, the data is inconsistent for a period of time, but the database replication mechanism will eventually update all the copies of the data to be consistent.
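A toy simulation of this convergence (three in-memory dictionaries stand in for replicas on separate servers; the replication log is an assumption of the sketch):

```python
replicas = [{"x": 1}, {"x": 1}, {"x": 1}]  # three copies of the data
pending = []  # replication log: updates not yet applied everywhere

def write(key, value):
    replicas[0][key] = value          # accepted by one replica first
    pending.append((key, value))      # queued for the other replicas

def replicate():
    """Asynchronous replication eventually applies every queued update."""
    for key, value in pending:
        for replica in replicas[1:]:
            replica[key] = value
    pending.clear()

write("x", 2)
inconsistent = {r["x"] for r in replicas}  # replicas disagree for a while
replicate()                                # replication catches up
consistent = {r["x"] for r in replicas}    # all replicas now agree
```

Between the write and the replication pass, different replicas return different answers; afterwards they all converge on the new value, which is the essence of eventual consistency.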

Conclusion

Suitability of the ACID or BASE model varies case-by-case and depends on the read and write patterns. Transactions are omnipresent in today’s enterprise systems, providing data integrity even in highly concurrent environments. So, choose ACID when there is a need for strong consistency in transactions and the schema is fixed.

In the age of IoT and AI/ML, High-Performance Computing is inevitable and the computing requirements are astronomical. Eventual consistency gives the IT giants an edge over others in the industry by enabling their applications to interact with customers across the globe, continuously, with the necessary availability and partition tolerance, all while keeping costs down, systems up, and customers happy. So, go for BASE-model datastores when availability and scalability are high priorities and the schema is evolving. Bear in mind that BASE datastores do not guarantee consistency of replicated data at write time, only eventually. The BASE consistency model is used primarily by aggregate stores, including column-family, key-value and document stores; HBase, Solr, Cassandra and Elasticsearch are built on BASE models. Relational databases such as MySQL, PostgreSQL, Oracle and Microsoft SQL Server support the ACID properties of transactions.

About the Author:

Bargunan is a Big Data Engineer and a programming enthusiast. His passion is to share his knowledge by writing his experiences about them. He believes “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

Software Defined Networking (SDN)

Chandrasekar Balasubramanian & Suresh Ramanujam

Introduction

As per Open Networking Foundation’s definition, Software-Defined Networking is the physical separation of the network control plane from the forwarding plane and where a control plane controls several devices.

In a traditional network architecture, individual network devices make traffic decisions (control plane) and forward packets/frames from one interface to another (data plane). Thus, they have all functions and processes related to both control plane and data plane.

But in Software-Defined Networking, the control plane and data plane are decoupled. The control plane is implemented in software which helps the network administrator to manage the traffic programmatically from a centralized location. The added advantage is that individual switches in the network do not require intervention of the network administrator to deliver the network services.

Software-Defined Networking (SDN) makes networks agile and flexible. It provides better network control, enabling cloud computing service providers to respond quickly to ever-changing business requirements. In SDN, the underlying infrastructure is abstracted away from applications and network services.

SDN architecture

A typical representation of SDN architecture includes three layers: the application layer, the control layer and the infrastructure layer.

The SDN application layer, not surprisingly, contains the typical network applications or functions like intrusion detection systems, load balancing or firewalls. A traditional network uses a specialized appliance, such as a firewall or load balancer, whereas a software-defined network replaces the appliance with an application that uses the controller to manage the data plane behaviour.

SDN architecture separates the network into three distinct layers: applications communicate with the control layer using northbound APIs, and the control layer communicates with the data plane using southbound APIs. The control layer is considered the brain of SDN; its intelligence is provided by centralized SDN controller software. This controller resides on a server and manages policies and the flow of traffic throughout the network. The physical switches in the network constitute the infrastructure layer.

How SDN works

SDN is internally an orchestration of several technologies. Network virtualization and automation using well defined APIs are the key ingredients. Functional Separation adds value by decreasing dependencies.

In a classic SDN scenario, a packet arrives at a network switch, and rules built into the switch’s proprietary firmware tell the switch where to forward the packet. These packet-handling rules are sent to the switch from the centralized controller.

The switch — also known as a data plane device — queries the controller for guidance as needed, and it provides the controller with information about traffic it handles. All the packets destined for same host are treated in a similar manner and forwarded along the same pathway by the switch.
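The controller/switch interaction can be caricatured in a few lines of Python. Every class and name here is an assumption of the sketch, not a real SDN API such as OpenFlow:

```python
class Controller:
    """Control plane: owns the forwarding policy for every switch."""
    def __init__(self, policy):
        self.policy = policy              # destination host -> output port

    def get_rule(self, dest):
        return self.policy.get(dest, "drop")

class Switch:
    """Data plane: forwards packets, caching rules from the controller."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def forward(self, dest):
        if dest not in self.flow_table:   # table miss: ask the controller
            self.flow_table[dest] = self.controller.get_rule(dest)
        return self.flow_table[dest]      # packets for the same host
                                          # all follow the same rule

ctrl = Controller({"10.0.0.5": "port-2"})
sw = Switch(ctrl)
sw.forward("10.0.0.5")   # miss -> queries the controller
sw.forward("10.0.0.5")   # hit  -> answered from the local flow table
```

The first packet toward a destination triggers a query to the centralized controller; subsequent packets are handled locally from the cached flow table, mirroring the classic SDN scenario described above.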

The virtualization aspect of SDN comes into play through a virtual overlay, which is a logically separate network on top of the physical network. In order to segment the network traffic, end-to-end overlays can be implemented. Thus, users can abstract the underlying network as well. This micro-segmentation is especially useful for service providers and operators with multi-tenant cloud environments and cloud services, as they can provision a separate virtual network with specific policies for each tenant.

Network Function Virtualization (NFV) and SDN complement each other very well. NFV virtualizes network services and abstracts them from dedicated hardware. Today there is a plethora of physical devices, each playing a specialized role such as load balancing, routing, switching, WAN acceleration or content filtering. Service providers see NFV as the solution for deploying new network services by virtualizing these network devices.

Some Examples of NFV

  • Virtualized Network Appliances, where dedicated network devices are replaced by virtual machines running on servers.
  • Virtualized Network Services/Functions (VNFs), which virtualize software-based network monitoring and management services, including traffic analysis, network monitoring and alerting, load balancing, and quality/class of service handling.

Benefits of SDN from networking architecture perspective

With SDN, an administrator can change any SDN based network switch’s rules when necessary — prioritizing, deprioritizing or even blocking specific types of packets with a granular level of control and security. Traffic loads are thus efficiently managed with lot of flexibility, specifically in a cloud environment where multi-tenant architecture is deployed. Essentially, this enables the administrator to use less expensive commodity switches and have more control over network traffic flow than ever before.

End-to-end visibility of the network easing network management is one of the many benefits of SDN. In order to distribute policies to all the networked switches, there is no need to configure multiple individual network devices. In this case, configuring and dealing with one centralized controller is enough. If the controller deems traffic suspicious, for example, it can reroute or drop the packets. SDN also virtualizes hardware and services that were previously carried out by dedicated hardware, resulting in the touted benefits of a reduced hardware footprint and lower operational costs.

Software-Defined Wide Area Network (SD-WAN) emerged from the virtual overlay aspect of software-defined networking. An organization’s WAN connectivity links are abstracted to form a virtual network, and the SDN controller sends and receives traffic over whichever connection it deems fit.

The Business Benefits of Software-Defined Network Solutions

Dynamically changing needs of the business require programmable network, preferably centralized. SDN aptly caters to these business needs by dynamically provisioning the services in the network. It also provides the following technical and business benefits:

  • Directly Programmable: Since the control layer is decoupled from the infrastructure layer, it is directly programmable.
  • Centralized Management: Controllers maintain a global view of the network and thus maintain central intelligence.
  • Reduced OpEx/CapEx
  • Deliver Agility and Flexibility
  • Enable Innovation

Software-defined networking will soon transform legacy data centres into virtualized environments comprising networking, compute and storage. SDN adds flexibility in controlling the network.

Software-Defined Networking Use Cases

As discussed, software-defined networking provides immense benefits as part of the migration to virtual environments. SDN use cases are especially effective in service provider environments with cloud computing architectures.

Bandwidth calendaring and WAN optimization are important service provider needs that SDN meets. SDN also offers bandwidth-on-demand, so carriers can take control of their links and opt for additional bandwidth on an ad-hoc basis. SDN adds value to cloud computing data centres through network virtualization.

In a segregated, multi-tenant network, this is very important for achieving faster turnaround times and efficient utilization of cloud resources. SDN policies also offer network access control and monitoring to enterprise campuses.

Conclusion

Together, SDN and NFV represent a path toward more generic network hardware and more open software. SDN with NFV is the future of networking and is becoming more and more the nucleus of modern data centre! At GAVS, we are tracking the SDN developments and adoption by various vendors and we are excited about the potential possibilities with SDN.

About the Authors:

Chandrasekar Balasubramanian:

Chandrasekar has 23 years of experience specialized in Networking. He is currently heading Networking Center of Excellence in GAVS with solid experience in Network Management, Network Security and Networking in general. He is passionate about Next Generation Networking technologies and is currently experimenting with it. He holds a couple of approved patents in Switching and Network security.

Suresh Ramanujam:

Suresh is a networking architect and part of Location Zero. He has been associated with multiple global network/telecom service providers’ network transformation projects, improving network efficiency and quality of service while optimizing infrastructure costs (CapEx and OpEx) through breakthrough models. He is passionate about evolving networking technologies and the journey towards software-defined everything.

Cleaning up our Digital Dirt

Sri Chaganty & Chandramouleswaran

Now, what exactly is digital dirt, in the context of enterprises? It is highly complex and ambiguous to precisely identify digital dirt, let alone address the related issues. Chandra Mouleswaran S, Head of Infra Services at GAVS Technologies, says that not all the applications running in an organization are actually required to run. Applications that exist but are not used by any internal or external users or applications constitute digital dirt. Such dormant applications accumulate over time due to uncertainty about their usage and a lack of clarity in sunsetting them. They stay in the organization forever, wasting resources, time and effort. These hidden applications burden the system and need to be discovered and removed to improve operational efficiency.

Are we prepared to clean the trash? The process of eliminating digital dirt can be cumbersome. We cannot fix what we do not find. So, the first step is to find them using a specialized application for discovery. Chandra further elaborated on the expectations from the ‘Discovery’ application. It should be able to detect all applications, the relationships of those applications with the rest of the environment and the users using those applications. It should give complete visibility into applications and infrastructure components to analyze the dependencies.

Shadow IT

Shadow IT, the use of technology outside the IT department’s purview, is becoming a tacitly approved aspect of most modern enterprises. As many as 71% of employees across organizations are using unsanctioned apps on devices of every shape and size, making it very difficult for IT departments to keep track. Shadow IT has evolved as technology has become simpler and the cloud has offered easy connectivity to applications and storage; as a result, employees have begun to cherry-pick the tools that help them get things done easily.

Shadow IT may not start or evolve with bad intentions. But when employees take things into their own hands, it poses a huge security and compliance risk if the sprawl is not reined in. Gartner estimates that by next year (2020), one-third of successful attacks experienced by enterprises will be on their shadow IT resources.

The Discovery Tool

IT organizations should deploy a tool that gives complete visibility of the landscape, discovers all applications – be they single-tenant or multi-tenant, single or multiple instance, native or virtually delivered, on-premise or on the cloud – and maps the dependencies between them. The tool should also reveal activity on those applications, showing the users who access them and response times in real time. The dependency map, along with user transactions captured over time, paints a very clear picture for IT managers and might bring to light some applications and dependencies that they never knew existed!

Discover is a component of GAVS’ AIOps platform, Zero Incident Framework™ (ZIF). It can work as a stand-alone component or cohesively with the rest of the AIOps Platform. Discover provides Application Auto Discovery and Dependency Mapping (ADDM): it automatically discovers and maps applications and the topology of the end-to-end deployment, hop by hop. Some of its key features are:

  • Zero Configuration

The auto-discovery features require no additional configuration upon installation.

  • Discovers Applications

It uniquely and automatically discovers every Windows and Linux application in your environment, identifies each by name, and measures the end-to-end and hop-by-hop response time and throughput of each application. This works for applications installed on physical servers, running in virtualized guest operating systems, provisioned automatically in private or hybrid clouds, or running in public clouds, and irrespective of whether the application was custom-developed or purchased.

  • Discovers Multitenant Applications

It auto-discovers multitenant applications hosted on web servers and does not limit the discovery to the logical server level.

  • Discovers Multiple Instances of Application

It auto-discovers multiple instances of the same application and presents them all as a group with the ability to drill down to the details of each instance of the application.

  • Discovers SaaS Applications

It auto-discovers any requests directed to SaaS applications such as Office 365 or Salesforce and calculates response time and throughput to these applications from the enterprise.

  • Discovers Virtually Delivered Applications or Desktops

It automatically maps the topology of the delivered applications and VDIs, hop-by-hop and end-to-end. It provides extensive support for Citrix delivered applications or desktops. This visibility extends beyond the Citrix farm into the back-end infrastructure on which the delivered applications and VDIs are supported.

  • Discovers Application Workload Topologies

The architecture auto-discovers application flow mapping topology and user response times to create the application topology and update it in near real-time — all without user configuration. This significantly reduces the resources required to configure service models and operate the product.

  • Discovers Every Tier of Every Multi-Tiered Application

It auto-discovers the different tiers of every multi-tiered application and displays the performance of each tier. Each tier is discovered and named with the transactional throughput and response times shown for each tier.

  • Discovers All Users of All Applications

It identifies each user of every application and the response time that the user experiences for each use of a given application.

  • Discovers Anomalies with Applications

The module uses a sophisticated anomaly detection algorithm to automatically assess when a response time excursion is valid, then if a response exceeds normal baseline or SLA performance expectations, deep diagnostics are triggered to analyze the event. In addition, the hop-by-hop segment latency is compared against the historical norms to identify deterministically which segment has extended latency and reduced application performance.
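The idea of comparing a response time against its historical norm can be sketched as follows. The actual ZIF algorithm is proprietary and certainly more sophisticated; this three-sigma rule is purely an illustrative assumption:

```python
import statistics

def is_anomalous(history_ms, latest_ms, sigmas=3.0):
    """Flag a response time that exceeds the historical baseline."""
    mean = statistics.mean(history_ms)
    stdev = statistics.pstdev(history_ms)
    return latest_ms > mean + sigmas * stdev

# Hypothetical per-segment latency history, in milliseconds.
history = [120, 130, 125, 128, 122, 126, 124]
is_anomalous(history, 129)   # within the baseline -> not an excursion
is_anomalous(history, 400)   # far beyond the norm -> triggers diagnostics
```

In the same spirit, hop-by-hop segment latencies could each be compared against their own historical norms to pinpoint which segment is responsible for degraded performance.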

For more detailed information on GAVS’ Discover, or to request a demo, please visit

About the Authors:

Chandra Mouleswaran S:

Chandra heads the IMS practice at GAVS. He has 25+ years of rich experience in IT infrastructure management, enterprise application design & development, and the incubation of new products/services across various industries. He holds a patent for a mistake-proofing application called ‘Advanced Command Interface’. He thinks ahead; his implementation of disk-based backup using SAN replication in one of his previous organizations, as early as 2005, is proof of his visionary skills.

Sri Chaganty:

Sri is a Serial Entrepreneur with over 30 years’ experience delivering creative, client-centric, value-driven solutions for bootstrapped and venture-backed startups.