Going through or growing through

Jinka Sai Jagadeesh

“Growing through life – and not just going through life” is the best decision that you can make to live a worthy life; a life to be cherished and remembered.

This mantra plays a significant role in our work lives as well. Many of us just aim to complete our tasks at work and get on with our lives. We are not driven by a sense of purpose at work and don’t usually go beyond our call of duty. There is hardly any “out-of-the-box” thinking.

Studies state that Millennials (those born between the early 80s and the mid-90s) are the largest segment of the workforce, and that Gen Z (those born from the mid-to-late 90s onwards) is entering it in growing numbers.

At the workplace, Millennials and Gen Z have different expectations, priorities and needs than previous generations. They want to work for people who will inspire them to do great work. They want to work with peers rather than for authority figures, they are up for collaboration, and they want a fun work culture.

So, how are organizations enabling the success of such a workforce? Are they doing enough to ensure that not only their customers, but also their employees are satisfied?

It is said that “A great culture enables success, builds team fabric and attracts talent too. We have all seen many talented teams failing simply because of a poor culture and human dynamics.”

Below are a few insights that I have gained from speeches, articles and books on work culture, teams and employee engagement:

  • Building strong teams that are focused on collaboration is a key element of success. A team that takes ownership of its contribution and of how it works together will have a strong shared vision and will continuously search for ways to improve. Do not underestimate the importance of building a great team culture.
  • In the present times, people cannot be simply ‘roped into’ the team. They must ‘opt-in’. The vision of the team must be well articulated and communicated. The path to achieve the goals must also be agreed upon by all the members. Even a single uncommitted member can bring down the morale of the entire team.
  • Provide roles with a clear vision and cause. Ask, “If people are subordinates, what are they subordinating to?” In my view, people are never subordinate to other people; they are subordinate to a cause. In that sense, even a leader is subordinate to a cause.
  • Treat your team members like they matter and are not simply resources for the company to utilize. Find ways to foster their self-esteem, ambition, independence, and desire for growth. This will lead to a better understanding of decisions, increased participation in meetings, thoughtful contribution in decision making, and a stronger sense of community.
  • As Tom Peters says, “Attitude > Ability”
  • Skills without the right attitude don’t move the needle. Never look just for capabilities; attitude is as important. In fact, with the right attitude, a team member can build the required capabilities. In the long run, the ones with the right attitude are the ones who can be relied upon. Many eminent leaders have harped on this very fact across various speeches and interviews.
  • Embrace diversity. Diversity is the key to an innovative team. If everyone belongs to a similar background or has similar thought processes, how will the team think differently? How will they look at things through a new set of lenses? How will they challenge the status quo? Celebrate these outliers, for they are the ones who will help you grow! No wonder more and more organizations are introducing diversity quotas for hiring.
  • Micromanagement vs. Trust. This topic has been discussed and debated a lot, and the answer almost always favours trust. Trust is the currency for eliciting excellence. It is simple: people only do their best work when they are trusted. With traditional ‘command-and-control’, people will comply at best. With trust and empowerment, they will exceed expectations.

“Processes without results are a waste. Results without processes are not sustainable.”

  • Mentors can help find ways to stimulate our growth, both personally and professionally. They are trusted advisers that can help us find the right career path.

“Awake, arise and learn by approaching the excellent ones,

The path to success is difficult and risky, it is like walking on the razor’s edge”

No task is impossible for a competent person,
No place is distant for a business person,
No land is foreign for an educated person,
No one is a stranger to a person with good communication skills.

Growing through the process truly succeeds when you support others to grow as well.

“The point of leading is not to cross the finish line first; it’s to take people across the finish line with you.”

References:

  1. https://andthenwesaved.com/dont-go-through-life-grow-through-life/
  2. https://www.academia.edu/34695453/Conversations_on_the_Remaking_of_Managers
  3. https://www.azquotes.com/quote/1062492
  4. http://qaspire.com/page/2/?s=When+does+learning+happen&submit=Search

About the Author:

Jagadeesh likes to bring to spotlight the matters that help in organizational growth and those that motivate people to perform better at their workplace.

He swears by the mantra, “Change is the ONLY constant.”

Essentials of a great place

Padmavathy Ravichanran

While flexible hours, gym time and vacation days are perks, a great place is built through a few practices, followed authentically and consistently, that instil trust.

In today’s digital landscape, leaders are inward, outward and forward-looking.

A great workplace is one where every team member thinks and behaves like a leader, and where you achieve your organizational objectives with people who give their personal best and work together as a team, all in an environment of trust! As Satya Nadella says, we must move from “I know it all” to “I will learn it all”. Learning from both inside and outside the organization is profound.

Mindfulness and reflection facilitate sharper focus and better sensing. Through self-awareness we can address the critical questions, “What could I change to evolve better?” and “How can I add more value?”, by interpreting our cognitions, emotions, and reactions.

Here are a few practices to transform one’s workgroup into a great place.

Listening

Proactively solicit suggestions to encourage and incorporate creativity, and to develop a personal connect with team members.

Listening is essential to building trust. Listening to your team members enables you to discover their strengths and challenges, and continuous listening helps build authenticity.

GAVS believes in Respect for individuals and in instilling trust through Empathy. To make this happen, GAVS has built sustainable tools and forums that encourage listening in the work environment.

As a culture, GAVS follows an open-door policy rather than a hierarchy-driven one when it comes to reaching out to the right people for the right answers. Each one of us is empowered to share perspectives and facts, and to solicit feedback on matters that impact us. From GAVS Voice to HR forums, from pulse checks to quarterly town halls, from the ideation wall to helpdesks, listening is a crucial practice at GAVS.

Thanking

A great place cultivates a “climate of appreciation” by sincerely recognizing good work and extra effort, frequently and in unexpected ways. It ensures every person is appreciated and recognized for even the smallest contribution that eases someone else’s work. Self-esteem plays a vital role in a performance-oriented organizational culture and is a key driving force that motivates in the longer run. GAVS has imbibed the importance of this and made appreciation and recognition part of its core cultural practice, instilling a sense of purpose in everyone – one of the core beliefs at GAVS.

On-time appreciation and recognition happen at various stages and varying degrees starting from a simple pat on the back to a prestigious Star Performer award during the Town Hall, depending on the magnitude of the achievement and contribution.

The simplest yet most effective mode of recognition is the instant appreciation a team member receives from their colleagues or manager in a daily huddle, promoting a sense of pride and respect. From thank-you letters to families to an ice-cream treat for a perfect CSAT, from team lunches and dinners to gift vouchers and GPoints, from the Wall of Appreciation/Wall of Fame to spot recognition, from thank-you notes from CXOs to long-service awards, GAVS believes that appreciation is a wonderful thing – it makes what is excellent in others belong to us as well.

Caring

Empathy is one of the core values that GAVS very firmly believes in and stands by. As much as we celebrate the successes of GAVSians, we share the pain of our colleagues during a personal crisis. At GAVS, employee well-being is about embracing the individual in totality – whether it is providing a sumptuous lunch so we don’t have to worry about meal planning, or workstation yoga for promoting health at work.

From wellness programs like meditation sessions to team lunches and dinners, from a wellness lounge to healthy snacks, from health awareness camps to winding down over table tennis or carrom, from health coverage to bringing kids to work, we care for one another. We never miss asking a colleague who is back from a sick day how we could help. A healthier, more motivated workforce is a happier, more productive workforce.

Developing

A great place doesn’t need complicated frameworks and models to drive employee experience. The employee experience is an aggregate of the thousands of short, transient interactions that each employee experiences every week – with the processes they follow, the technology they use, and their peers and managers.

We are part of an industry where skillsets get outdated fast. To have a competitive edge in such an environment, we must have the right people with the right mindset and the right skillset. A learning culture is one that nurtures talent and fosters competent, skilled people who excel and pursue the organization’s strategic goals and the endeavors of its customers.

Skills and competencies do contribute to career progression, but a major part of success is the “how” and the “how well” of what one does in terms of performance and results. From onboarding right through to learning interventions across the learning journey – essential learning hours, training channels, and learning best practices from one another – GAVS invests in developing its people.

Each of us can contribute to a great place by

  • Having a high level of self-awareness
  • Learning, growing and seizing opportunities
  • Actively seeking feedback and responding positively to it
  • Having pride in our contribution to the mission
  • Collaborating with our colleagues
  • Integrating the practice of listening, thanking, caring and developing to instill more trust
  • Managing our personal brand

As a member of a great place, it is everyone’s responsibility to focus on the full circle and not just the pieces. Every situation may be unique, but a focused and purposeful employee holds the key to a competitive advantage for the organization.

About the Author:

Padma’s Clifton StrengthsFinder Top 5 Signature Themes are Consistency, Discipline, Developer, Empathy, and Harmony. She is part of the HR team, and enjoys listening to audiobooks, journaling, and practicing yoga.

AI for Healthcare

Bindu Vijayan

There is so much work going on in the field of Artificial Intelligence and that makes a layperson like me worry about ‘bewildered’ machines. Weird as that may sound to you cool technologists, I am the eternal skeptic, though always in awe of science, constantly looking to know where it takes us, but when it comes to literally stockpiling efficiency like we are training machines to do, my awe for the subject turns to a perpetual state of wonder.

Amongst everything that AI can do and is learning to master, its role in Healthcare truly amazes me with its broad spectrum of application. There is a lot of potential to improve patient care through AI; studies have shown that algorithms can match or outperform pathologists on certain tasks, with significant gains in speed and accuracy, which is good for patient care. For humans, some of those tasks are very tedious and time-consuming; handing them over would save time for the specialists and allow them to focus on higher-level intellectual work like synthesizing diagnostic information rather than, say, looking for that one thing in a petri dish or on a glass slide.

Today, AI is used across a broad spectrum, right from diagnosis, aspects of surgery, planning treatment protocols, medication, aftercare, medical signal and image processing, and so on.

The Data – Healthcare systems have an imperative to do something with their data to improve the quality and value of the care they provide. With computing now cheap enough to actually use that data, specific fields have an enormous opportunity in AI.

AI systems need to be trained with the data that is generated from clinical processes like screening, diagnosis, treatment decisions and protocols for the system to learn about the subject and the outcomes that it would be responsible to generate. The data comes from electronic recordings, medical notes, prescriptions, physical examinations, images, and laboratory notes.

Devices – AI devices in healthcare largely fall into two broad categories. The first is Machine Learning (ML) techniques that analyze structured data, like data from EP, imaging, etc.; ML works to provide probabilities of diseases, conditions and outcomes by attempting to cluster patient traits. The second category is Natural Language Processing (NLP), which extracts details from unstructured data like medical journals, clinical notes and so on, supplementing the structured data. NLP also works to convert text into structured data that can be understood by machines.
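
To make the first category concrete, here is a minimal, purely illustrative sketch (not from any study referenced here) of estimating a disease probability from structured patient features using scikit-learn; the feature set, values and labels are hypothetical.

```python
# Illustrative only: estimating a disease probability from structured patient data.
# Feature names, values and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, systolic BP, cholesterol, smoker (1/0)
X_train = np.array([
    [54, 140, 240, 1],
    [39, 118, 180, 0],
    [67, 155, 260, 1],
    [45, 122, 190, 0],
    [71, 160, 280, 1],
    [50, 130, 210, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = condition present

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

new_patient = np.array([[60, 150, 250, 1]])
prob = model.predict_proba(new_patient)[0][1]
print(f"Estimated probability of condition: {prob:.2f}")
```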

Man vs Machine – Analysing highly complex medical data through ML algorithms, creating logic and arriving at conclusions that emulate human cognition has given AI its super status in modern medicine. AI can bring screenings and precise diagnostics to less-developed and rural areas where medical professionals are not available. It is fascinating to read how, in Radiology, a deep-learning-based algorithm was developed using more than 50,000 normal chest images and almost 7,000 scans with active TB. The algorithm is reputed to have become so good that in performance tests it easily beat radiologists.

In Dermatology, a deep learning Convolutional Neural Network (CNN) has proven to be more efficient than dermatologists at diagnosing skin cancer. Researchers trained the algorithm by exposing it to over 100,000 images of malignant melanomas and non-malignant moles. The study reports that the CNN made the correct diagnosis more often than 58 dermatologists from 17 countries. “The CNN missed fewer melanomas, meaning it had a higher sensitivity than the dermatologists, and it misdiagnosed fewer benign moles as malignant melanoma, which means it had a higher specificity; this would result in less unnecessary surgery,” – Professor Haenssle.
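
The study’s network is not the one shown here; the sketch below is just a minimal TensorFlow/Keras example of the kind of CNN binary classifier described above. The image size, layer sizes and the commented-out dataset pipeline are illustrative assumptions.

```python
# Minimal, illustrative CNN for binary image classification (benign vs. malignant).
# Layer sizes, image dimensions, and the dataset pipeline are assumptions,
# not the architecture used in the study cited above.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),            # RGB lesion images, resized
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # probability of malignancy
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc"),
                       tf.keras.metrics.Recall(name="sensitivity")])

# train_ds / val_ds would be tf.data.Dataset objects built from labeled images,
# e.g. via tf.keras.utils.image_dataset_from_directory(...)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
model.summary()
```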

In Oncology, AI is expected to crack the code of personalized treatment through ML. Researchers expect to be able to establish an intuitive method of sorting through all the data. They are also using ML to study how cancers develop and tumours progress, and are even working on programming cells to fight cancer and other fatal diseases. So, the study of biological processes, as well as diagnosis, treatment, and prevention, is being augmented by AI.

Cardiovascular disease is one of the main causes of disability and death globally. Early detection is critical for management and treatment, and AI can be the game-changer here. AI-based predictions and deep learning can help identify risk factors through retinal images, which can be captured cheaply and, most importantly, non-invasively. Google’s health-tech arm Verily has evolved a method using ML to assess one’s risk of heart disease. Scans of patients’ eyes are analyzed, and the software is reportedly very accurate in deducing data such as whether the patient is a smoker, the patient’s age, blood pressure, etc., which is then used to predict their risk of cardiac events.

In the US, waste in healthcare is estimated at an astounding $765 billion annually. Routine and unneeded tests make up a large part of this, and such tests can be significantly expensive for individuals without changing treatment outcomes. AI can help by reducing the number of tests a patient needs to go through, as it works with patient information across various healthcare systems and makes predictions based on the individual’s medical history and symptoms. Armed with such information, physicians can minimize the number of tests, saving cost and time. With mobile devices integrated into the hospital workflow, AI-driven decision support gives physicians even the minute data points they might otherwise have missed. All this helps physicians make a much more informed decision and save their patients’ money and discomfort.

AI, of course, has its limitations too. Studies done in clinical labs show that algorithms can be precise at very specific tasks, while actual healthcare facilities deal with so much more.

If we really look at the narrow tasks AI models are asked to perform versus the comprehensive work an average medical specialist does, it is clear that using these models requires an additional layer of technology and infrastructure, and that it takes time to learn how to do a completely digital diagnosis with an AI model. So, until we can seamlessly incorporate that level of technology into current workflows, it could pose a barrier to the widespread adoption of AI in several areas of Medicine.

And if you think about it, AI does have the potential to increase inequality in healthcare – between societies that have access to medical AI and those that don’t. We already have so many healthcare disparities globally; AI could widen that gap. Or, what if someone comes up with a generative adversarial network? Artifacts in image acquisition could be used to make the model confident about a wrong diagnosis.

Having said that, AI has revolutionized primary care and in-home care, and we could probably do away with long-term acute care facilities if we come up with the right ways to care for people in their homes.

We should also be thinking about training the next generation of physicians to use advanced AI in ways that enable us to deliver better care to our patients. It might call for cross-training, like making data literacy part of medicine’s core curriculum rather than an optional subject.

With everything that AI can do for Medicine today, I still do believe that Medicine is inherently a human enterprise and empathy and caring for another person is not something that an algorithm can reproduce…

References

  1. https://www.researchgate.net/publication/336264780_The_impact_of_artificial_intelligence_in_medicine_on_the_future_role_of_the_physician
  2. https://www.bernardmarr.com/default.asp?contentID=1542
  3. https://www.hindawi.com/journals/jhe/si/618674/cfp/
  4. https://www.h2o.ai/healthcare/?gclid=EAIaIQobChMI3pnj5Ku65gIVBZWPCh1rOw0XEAAYASAAEgL-YvD_BwE
  5. https://in.pinterest.com/pin/725501821200722067/?lp=true
  6. https://www.forbes.com/sites/razvancreanga/2019/03/04/data-governance-ai-and-healthcare-an-exciting-new-world-of-health-provision/#5c3f14b3a44b

About the Author:

Bindu Vijayan is a Sr. Manager at GAVS, a true-blue GAVSian, having spent 8 years with the company. She is an avid reader, loves music, poetry, traveling, yoga and meditation, and admits she is entirely influenced by Kafka’s perspective, “Don’t bend; don’t water it down; don’t try to make it logical; don’t edit your own soul according to the fashion. Rather, follow your most intense obsessions mercilessly.”

Evolution of speech recognition

Naveen KT

Speaking with inanimate objects and getting work done through them has transitioned from being a figment of our imagination to a reality. Case in point, personal assistant devices like Alexa can recognize our words, interpret the meaning and carry out commands.

The journey of speech recognition technology has been nothing short of a rollercoaster ride. Let us look at the developments that enabled the commercialization of Automatic Speech Recognition (ASR), and what these systems could accomplish long before any of us had heard of Siri or Google Assistant.

The speech recognition field was propelled both by the application of different approaches and by the advancement of technology. Over the decades, researchers conceived of myriad ways to dissect language: by sounds, by structure, and with statistics.

Early Days

Even though human interest in recognizing and synthesizing speech goes back centuries, it was only in the last century that something recognizable as ASR was built. The ‘digit recognizer’ named Audrey, by Bell Laboratories was among the first projects. It could identify spoken numbers by looking for audio fingerprints called formants, the distilled essences of sounds.

Next came the Shoebox in the 1960s. Developed by IBM, the Shoebox could recognize numbers and arithmetic commands (like ‘plus’ and ‘total’). Shoebox could also pass on the math problem to an adding machine, to calculate and print the answer.

Half way across the world, in Japan, hardware was being built that could recognize the constituent parts of speech like vowels. Systems were also being built to evaluate the structure of speech to figure out where a word might end.

A team at University College in England had devised a system that could recognize 4 vowels and 9 consonants by analysing phonemes, the discrete sounds of a language.

However, these were all disjointed efforts and were lacking direction.

In a surprising turn of events, funding for ASR programs at Bell Laboratories was stopped in 1969. The reasons cited were a “lack of scientific rigor” in the field and “too much wild experimentation”. Funding was reinstated in 1971.

In the early 1970s, the U.S. Department of Defence’s ARPA (the agency now known as DARPA) funded a five-year program called Speech Understanding Research. Several ASR systems were created, and the most successful one, Harpy (by Carnegie Mellon University), could recognize over 1,000 words. Efforts to commercialize the technology had also picked up speed: IBM was working on speech transcription in the context of office correspondence, and Bell Laboratories on ‘command and control’ scenarios.

The key turning point was the popularization of Hidden Markov Models (HMMs). These models used a statistical approach that translated into a leap forward in accuracy. Soon, the ASR field began coalescing around a set of tests that provided a benchmark to compare against. This was further encouraged by the release of shared data sets that researchers could use to train and test their models.
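
To give a flavour of that statistical approach, the sketch below runs Viterbi decoding over a toy HMM. The states, observations and probabilities are made up for illustration; real recognizers model phonemes and acoustic features at far larger scale.

```python
# Toy Viterbi decoding over a Hidden Markov Model.
# States and probabilities are invented for illustration; real ASR systems
# use far larger models trained on acoustic data.

states = ["S1", "S2"]                      # e.g. two phoneme-like states
observations = ["low", "high", "high"]     # e.g. quantized acoustic features

start_p = {"S1": 0.6, "S2": 0.4}
trans_p = {"S1": {"S1": 0.7, "S2": 0.3},
           "S2": {"S1": 0.4, "S2": 0.6}}
emit_p = {"S1": {"low": 0.8, "high": 0.2},
          "S2": {"low": 0.3, "high": 0.7}}

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}

    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path

    best_state = max(V[-1], key=V[-1].get)
    return V[-1][best_state], path[best_state]

prob, best_path = viterbi(observations, states, start_p, trans_p, emit_p)
print(f"Most likely state sequence: {best_path} (p={prob:.4f})")
```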

ASR as we know it today, was introduced in the 1990s. Dragon Dictate launched in 1990 for a staggering $9,000, with a dictionary of 80,000 words and features like natural language processing.

These tools were time-consuming to use and required that users speak in a slow, stilted manner; Dragon could initially recognize only 30–40 words a minute, while people typically talk around four times faster than that. By 1997, Dragon introduced NaturallySpeaking, which could capture words at a more fluid pace and at a much lower price tag of $150.

Current Landscape

Voice has been touted as the future. Tech giants are investing in it and placing voice-enabled devices at the core of their business strategy.

Machine learning has been behind major breakthroughs in the field of speech recognition. Google’s efforts in this field culminated in the introduction of Google Voice Search app in 2008. They further refined this technology, with the help of huge volumes of training data and finally launched the Google Assistant.

Digital assistants like Google Assistant, Siri, Alexa and others are changing the way people interact with their devices. They are intended to help individuals perform or complete basic tasks and to respond to queries.

With the capacity to retrieve data from a wide variety of sources, these assistants help solve problems in real time, enhancing user experience and human productivity.

Popular Voice assistants include:

  • Amazon’s Alexa
  • Apple’s Siri
  • Google’s Google Assistant
  • Microsoft’s Cortana

Application of Speech Recognition Technology

Speech recognition technology and the use of digital assistants have moved rapidly from our phones to our homes, and their application in industries such as business, banking, marketing, and healthcare is rapidly becoming obvious.

In the Workplace: Speech recognition technology in the work environment is a push to increase productivity and efficiency. Examples of office tasks digital assistants are, or will be, able to perform:

  • Search for documents or reports on a computer
  • Create tables or graphs using data
  • Answer queries
  • On-request document printing
  • Record minutes
  • Perform other routine tasks like scheduling meetings and making travel arrangements

In Banking: The aim of speech recognition in the banking and financial industries is to reduce friction for the customer. Voice-activated banking could reduce the need for human customer assistance and lower employee costs. A personalized banking assistant could consequently boost customer loyalty and satisfaction.

How speech recognition can improve banking:

  • Request financial information
  • Make payments
  • Receive information about your transaction history

In Marketing: Voice-search can and will cause shifts in consumer behaviour. It is essential to understand such shifts and tweak the marketing activities to keep up with the times.

  • With speech recognition, there will be a new type of information available for marketers to analyze. People’s accents, speech patterns, and vocabulary can be used to infer a buyer’s location, age, and other demographic data, such as their social affiliation.
  • Speaking allows for longer, more conversational searches. Marketers and optimisers may need to concentrate on long-tail keywords and on creating conversational content to stay ahead of these trends.

In Healthcare: In situations where seconds are critical and clean working conditions are essential, hands-free, prompt access to data can have a positive effect on medical efficiency.

Benefits include:

  • Quick lookup of information from medical records
  • Less paperwork
  • Reduced time on inputting data
  • Improved workflow

This is just scratching the surface of the applications of this technology. The future of speech recognition technology holds a lot of promise across various industries.

References:

  1. https://www.getsmarter.com/blog/market-trends/applications-of-speech-recognition
  2. https://medium.com/swlh/the-past-present-and-future-of-speech-recognition-technology-cf13c179aaf
  3. https://www.globalme.net/blog/the-present-future-of-speech-recognition
  4. https://bit.ly/347MAYw
  5. https://bit.ly/2Oq9VOC
  6. https://bit.ly/2OtFp6t
  7. https://bit.ly/2pB6hcr
  8. https://bit.ly/2QAYR43

About the Author:

Naveen is a software developer at GAVS. He teaches underprivileged children and is interested in giving back to society in as many ways as he can. He is also interested in dancing, painting, playing keyboard and is a district-level handball player.

The CAP on Choosing the Right Distributed Databases

Bargunan Somasundaram

Amazon found that every 100 milliseconds of latency cost them 1% in sales. Application users, customers, and website visitors make an instant judgement about the application and the business. If the application is fast, a strong first impression is made. It’s a win for user experience.

“If it’s fast, it must be professional!”

It is human psychology to consider faster applications to be more reliable. We relate speed to efficiency, trust, and confidence. On the other hand, a slow application or website makes us think it’s unsafe, insecure, and untrustworthy. And it’s difficult to turn around that negative first impression.

How does Google return search results so quickly? How is Facebook so fast even with 1.35 billion users? How do large eCommerce sites like Amazon and Flipkart serve pages so fast despite the huge influx of online traffic on a festive day? How are Amazon and Flipkart rolling in money by upselling, cross-selling and down-selling?

Is it magic?

No, it’s Distributed Computing. The terms “Concurrent Computing”, “Parallel Computing”, and “Distributed Computing” overlap, and Distributed Computing spans many computing paradigms, like Cloud Computing, Grid Computing and Cluster Computing, with the latest being Edge Computing.

Distributed Systems for Distributed Computing

To support all those types of computing, a robust, distributed database is a prerequisite. Distributed databases, mostly NoSQL, come in a wide variety of data models, including key-value, document, columnar and graph formats; Apache HBase, Cassandra, Redis, MongoDB, Elasticsearch, Solr and Neo4j, to name a few. However, to effectively pick the tool of choice, a basic idea of the CAP Theorem (Brewer’s Theorem) is essential.

The CAP Theorem states that a distributed system cannot be strictly consistent, highly available and partition tolerant at the same time. System designers MUST choose at most two out of the three guarantees.

There is no silver bullet. “One data store to have them all (Consistency, Availability and Partition Tolerance)” is something that Lord of the Rings fans would understand quickly.

The CAP Theorem is very important in the Big Data world, where we need to choose whether the system should be highly consistent or highly available under a network partition. The three guarantees are:

  1. Consistency
  2. Availability
  3. Partition Tolerance

Consistency

All the users or clients have the same view of data, irrespective of any update or deletion. If there are multiple replicas and there is an update being processed, all users see the update go live at the same time even if they are reading from different replicas. Systems that do not guarantee immediate consistency may provide eventual consistency.

For example, they may guarantee that any update will propagate to all replicas in a certain amount of time. Until that deadline is reached, some queries may receive the new data while others will receive older, out-of-date answers. This is called eventual consistency.

Immediate consistency is not always important. Take, for instance, a socio-professional platform like LinkedIn that shows a connections count for each user. The count is displayed in the user’s own profile and in network suggestions as mutual connections, although the mutual and actual counts may differ. Consider that the connections database is replicated in the United States, Europe, and Asia. Suppose a user in India gains 10 connections and that change takes a few minutes to propagate to the United States and Europe replicas. This may be good enough for such a system, because a perfectly accurate connections or mutual-connections count is not always essential. If a user in the United States and one in Europe were talking on the phone as one was expanding connections, the other user would see the update seconds later and that would be okay. If the update took minutes due to network congestion or hours due to a network outage, the delay would still not be a terrible thing.

Now imagine a banking application built on this system. A person in the United States and another in India could coordinate their actions to withdraw money from the same account at the same time. The ATM that each person uses would query its nearest database replica, which would claim the money is available and may be withdrawn. If the updates propagated slowly, both of them would have the cash before the bank realized the money was already gone. Here, immediate consistency is necessary.
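
A tiny, self-contained sketch (purely illustrative) of why the banking case needs immediate consistency: two replicas that reconcile lazily can both approve a withdrawal against the same balance.

```python
# Illustrative only: two lazily-synchronized replicas approving withdrawals
# against the same balance, showing why eventual consistency is unsafe here.

class Replica:
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

    def withdraw(self, amount):
        # Each replica checks only its own (possibly stale) view of the balance.
        if self.balance >= amount:
            self.balance -= amount
            print(f"{self.name}: approved withdrawal of {amount}")
            return True
        print(f"{self.name}: declined withdrawal of {amount}")
        return False

us = Replica("US replica", balance=100)
india = Replica("India replica", balance=100)

us.withdraw(100)      # approved
india.withdraw(100)   # also approved -- the update has not propagated yet

# When the replicas eventually reconcile, the account is overdrawn by 100.
reconciled_balance = 100 - 100 - 100
print(f"Balance after reconciliation: {reconciled_balance}")
```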

Availability

Availability is the guarantee that every request receives a response indicating whether it succeeded or failed. Whether you want to read or write, you will get some response back; that is, the system continues to work and serve data despite node failures. Using many replicas to store data, such that clients always have access to at least one working replica, is what guarantees availability.

For example, a LinkedIn user might try to access a resource like a shared post or video at peak time. Due to overloading, LinkedIn may reply to the request with an error such as “try again later”. Being told this immediately is more favourable than having to wait minutes or hours before giving up.

Partition Tolerance

Partition tolerance means that the system will continue to operate even if any number of messages sent between nodes is lost. A single node failure should not cause the entire system to collapse. A three-legged cat is partition tolerant; if it were a horse, we would have to put it out of its misery.

In the above LinkedIn scenario, the site continues to operate even if the node in the United States goes down or loses communication with the nodes in Asia and Europe.

The CAP Tradeoff

  1. CA (Consistency and Availability) – Non-distributed system

Systems which retain consistency and availability while sacrificing partition tolerance cannot be truly distributed. Traditional relational databases like Oracle, MySQL, and PostgreSQL are mostly consistent and available (CA). Since they renounce partition tolerance, they can only be scaled up, not scaled out. They use transactions and other database techniques to ensure that updates are atomic: they propagate completely or not at all. Thus, they guarantee that all users will see the same state at the same time. Banking and finance applications require the data to be consistent and available.

  2. AP (Availability and Partition Tolerance) – True Distributed system

All distributed systems must retain partition tolerance. AP-based systems trade off consistency for availability, which means they cannot guarantee consistency of data between nodes. Distributed NoSQL datastores like Amazon’s Dynamo, Cassandra, CouchDB, and Riak adopt this model: they allow users to write data to one node of the database without waiting for other nodes to come into agreement, preferring availability over immediate consistency.

  3. CP (Consistency and Partition Tolerance) – True Distributed system

A CP-based distributed system gives up availability and prefers consistency. This means that the data is consistent between all the nodes, but the system may not be fully available if a node goes down.

For any read or write into a CP-based datastore, the nodes must first come into agreement. So, full availability takes a back seat, giving way to strong consistency.

When to opt for what?

Choose a CP-based database system when it is more critical that all users see a consistent view of the data than that the system is always available. CP systems are not completely available, but they are strongly consistent.

Choose an AP-based database system when the application can sacrifice immediate data consistency in return for high availability and performance. AP-based systems are not immediately consistent; with eventual consistency in place, they guarantee that the data will be reconciled a little later.

In a nutshell, choosing between Consistency and Availability is a software trade-off (a small example of tuning this trade-off per query follows the list below):

  • Choose Consistency over Availability when the business requirements dictate atomic reads and writes.
  • Choose Availability over Consistency when the business requirements allow for some flexibility, to synchronize data with some acceptable delay.
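
Some distributed datastores let you tune this trade-off per query. As a hedged illustration (assuming a reachable Cassandra cluster with a keyspace named bank and a table accounts, both hypothetical and not from this article), the Cassandra Python driver lets you choose a consistency level per statement:

```python
# Illustrative sketch: tuning consistency per query with the Cassandra Python
# driver (pip install cassandra-driver). The cluster address, keyspace and
# table names are hypothetical.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("bank")

# Favour consistency: a QUORUM read fails if a majority of replicas cannot
# be reached, but does not return data a quorum has not agreed on.
consistent_read = SimpleStatement(
    "SELECT balance FROM accounts WHERE id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
print(session.execute(consistent_read, ("acc-42",)).one())

# Favour availability: a read at ONE succeeds if any single replica responds,
# at the risk of returning a slightly stale balance.
available_read = SimpleStatement(
    "SELECT balance FROM accounts WHERE id = %s",
    consistency_level=ConsistencyLevel.ONE,
)
print(session.execute(available_read, ("acc-42",)).one())

cluster.shutdown()
```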

Conclusion

Given the astronomical computation requirements of today, scaling up is obsolete; scaling out is the only optimal solution. Distributed systems (horizontally scalable) let us achieve levels of computing power and availability that were simply not possible in the past. Distributed datastores deliver higher performance, lower latency, and near-100% uptime in data centers that span the entire globe. However, distributed systems are more complex than their single-node counterparts. Understanding the complexity incurred in distributed systems, making the appropriate trade-offs for the task at hand (CAP), and selecting the right tool for the job is necessary with horizontal scaling.

About the Author:

Bargunan is a Big Data Engineer and a programming enthusiast. His passion is to share his knowledge by writing his experiences about them. He believes “Gaining knowledge is the first step to wisdom and sharing it is the first step to humanity.”

Monitoring for Success

Suresh Kumar Ramasamy

Do you know if your end-users are happy?

(In the context of users of Applications (desktop, web or cloud-based), Services, Servers and components of IT environment, directly or indirectly.)

The question may sound trivial, but it has a significant impact on the success of a company. The user experience is a journey, from the time users start using the application or service till after they complete the interaction. Experience can be determined based on factors like speed, performance, flawlessness, ease of use, security, and resolution time, among others. Hence, monitoring the ‘wow’ and ‘woe’ moments of the users is vital. Monitor is a component of GAVS’ AIOps platform, Zero Incident Framework™ (ZIF). One of the key objectives of the Monitor platform is to measure and improve end-user experience. This component monitors, in real time, all the layers involved in the user experience (including but not limited to application, database, server, APIs, end-points, and network devices). Ultimately, this helps drive the environment towards zero incidents.

Key Features of ZIF Monitor are:

  • Unified solution for all IT environment monitoring needs: The platform covers the end-to-end monitoring of an IT landscape. The key focus is to ensure all verticals of IT are brought under thorough monitoring. The deeper the monitoring, the closer an organization is to attaining a Zero Incident Enterprise™.
  • Agents with self-intelligence: The intelligent agents capture various health parameters of the environment. When the target environment is already running low on resources, the agent does not burden it with more load; it collects the health-related metrics and communicates them efficiently and effectively through the telemetry channel. The intelligence is applied in deciding which parameters to collect, how often to collect them, and more (a generic sketch of such an agent follows this list).
  • Depth of monitoring: The core strength of Monitor is it comes with a list of performance counters which are defined by SMEs across all layers of the IT environment. This is a key differentiator; the monitoring parameters can be dynamically configured for the target environment. Parameters can be added or removed on a need basis.
  • Agent & Agentless (Remote): Customers can choose between agent-based and agentless options. The remote solution is called the Centralized Remote Monitoring Solution (CRMS). Each monitoring parameter can be remotely controlled and defined from the CRMS. Even the agents running in the target environment can be controlled from the server console.
  • Compliance: Monitor plays a key role in the compliance of the environment, from ensuring the availability of necessary services and processes in the target environment to defining the standard of what application, make, version, provider, size, etc. are allowed in the target environment.
  • Auto discovery: Monitor can auto-discover the newer elements (servers, endpoints, databases, devices, etc.) that are getting added to the environment. It can automatically add those newer elements into the purview of monitoring.
  • Auto scale: Centralized Remote Monitoring Solution (CRMS) can auto-scale on its own when newer elements are added for monitoring through auto-discovery. The auto scale includes various aspects, like load on channel, load on individual polling engine, and load on each agentless solution.
  • Real-time user & Synthetic Monitoring: Real-time user monitoring observes the environment while the user is active. Synthetic monitoring works through simulated techniques; it doesn’t wait for the user to make a transaction or use the system. Instead, it simulates the scenario and provides insights to make decisions proactively.
  • Availability & status of devices connected: Monitor also includes the monitoring of availability and control of USB and COM port devices that are connected.
  • Black box monitoring: It is not always possible to instrument the application to get insights. Hence, the Black Box technique is used. Here the application is treated as a black box and it is monitored in terms of its interaction with the Kernel & OS through performance counters.
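
ZIF’s agents are proprietary, but as a rough, generic illustration of what a lightweight monitoring agent does, the sketch below collects a few host health metrics and ships them to a hypothetical telemetry endpoint. The URL, payload shape, interval and back-off rule are assumptions, not ZIF’s actual design.

```python
# Generic monitoring-agent sketch (not ZIF code). Collects basic host health
# metrics and posts them to a hypothetical telemetry endpoint.
# Requires: pip install psutil requests
import time
import socket
import psutil
import requests

TELEMETRY_URL = "https://telemetry.example.com/metrics"   # hypothetical endpoint
INTERVAL_SECONDS = 60

def collect_metrics():
    return {
        "host": socket.gethostname(),
        "timestamp": int(time.time()),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

def run_agent():
    while True:
        metrics = collect_metrics()
        # Back off if the host itself is under pressure, so the agent does not
        # add to the load (a simplified form of "self-intelligence").
        if metrics["cpu_percent"] > 90:
            time.sleep(INTERVAL_SECONDS * 2)
            continue
        try:
            requests.post(TELEMETRY_URL, json=metrics, timeout=5)
        except requests.RequestException as err:
            print(f"Failed to ship metrics: {err}")
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    run_agent()
```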

High-level overview of Monitor’s components:

  • Agents, Agentless: These are the means through which monitoring is done at the target environment, like user devices, servers, network devices, load balancers, virtualized environment, API layers, databases, replications, storage devices, etc.
  • ZIF Telemetry Channel: The performance telemetry that is collected from source to target is passed through this channel to the big data platform.
  • Telemetry Data: Refers to the performance data and other metrics collected from all over the environment.
  • Telemetry Database: This is the big data platform, in which the telemetry data from all sources are captured and stored.
  • Intelligence Engine: This parses the telemetry data in near real time and raises notifications based on rule-based thresholds as well as dynamic thresholds.
  • Dashboard & Alerting Mechanism: These are the means through which the results of monitoring are conveyed, as metrics on dashboards and as notifications.
  • Integration with Analyze, Predict & Remediate components: The monitoring module communicates the telemetry to the Analyze and Predict components of the ZIF platform, which use the data for analysis and apply Machine Learning for prediction. Both the Monitor and Predict components communicate with the Remediate platform to trigger remediation.

The Monitor component works in tandem with the Analyze, Predict and Remediate components of the ZIF platform to achieve an incident-free IT environment. Implementing ZIF is the right step towards driving an enterprise to zero incidents. ZIF is the only platform in the industry that comes from a single platform owner who owns the end-to-end IP of the solution, with products developed from scratch.

For more detailed information on GAVS’ Monitor, or to request a demo please visit https://zif.ai/products/monitor/

The DevOps Synergy

Today’s mantra for software delivery is Agility. It is a huge differentiator that gives organizations a competitive edge and emboldens even fledgling start-ups to challenge giants in the IT industry. Traditional methods of software development have not been able to cope with today’s velocity of delivery and innovation demands, and are screaming for a lighter yet holistic approach. In traditional development models like Waterfall, phases of the development life cycle are followed sequentially: Requirements, Analysis & Design, Development, Integration & Testing, Deployment, and Maintenance, with documentation and sign-off at the end of each phase. This approach is heavy and documentation-intensive and is not quite responsive to the requirement or scope changes since it dictates strict adherence to the linear process model. And worse still, the end product of a long-drawn development cycle may not be quite what the customer expected!

Agile development broadly refers to methodologies like Scrum & Kanban that are based on iterative development/testing in short bursts called sprints, continuous feedback, retrospection, course-correction and constant collaboration amongst the teams involved & the customer. This facilitates incremental evolution of the software and adaptability.

What does DevOps bring to the table?

The idea behind DevOps is to foster a collaborative work culture within the organization where the Development, QA & ITOps teams work together as one cross-functional unit towards common goals. When the walls of team silos are broken down; teams integrate well with each other, and there is a free flow of communication, it percolates down to quicker, quality deliverables with very low failure rates. It is such a basic but powerful idea that makes us wonder why we didn’t do it all along!

DevOps practices help create a standardized and stable operating environment and eliminate the warring dynamics between Dev, QA & Ops teams, each working on its own agenda, refusing to take ownership and blaming the others when issues occur. They are now one multi-disciplinary unit working together from day one, with full control and autonomy over the entire software delivery process.

DevOps is a winning combination of healthy work culture, processes and tools/cloud services. A good DevOps implementation ensures the incremental evolution of software through processes like Continuous Integration, Continuous Delivery/Continuous Deployment (CI/CD), automated in a process pipeline with an integrated toolchain so that human-induced wait-time/lag does not hamper agility. Continuous Testing and Continuous Performance Monitoring are also an integral part of this pipeline. Although there will be differences in the DevOps implementation styles and the tools/services used, here’s a quick look at a typical CI/CD process.

Frequent commits and builds of small blocks of code are a DevOps best practice. This prevents the chaos that usually ensues when code from the different feature branches gets integrated into the main code branch, firing merge conflicts. Continuous Integration is a workflow strategy that involves compiling the committed code from a source repository like GitHub, Azure Repos or Bitbucket and validating it with automated static code analysis, unit tests, and integration tests. The quality of the test suite determines the quality of the newly integrated code. Typically, this involves an integration server/CI service like Jenkins, Azure DevOps or GoCD that gets triggered on a commit (or a pull request (PR), depending on the implementation), builds the code and runs the automated tests.
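
As an illustration of what such a CI service runs on every commit, here is a deliberately minimal sketch, not the configuration syntax of Jenkins, Azure DevOps or GoCD: it chains static analysis, unit tests and a build step, failing fast if any stage breaks. The stage commands assume a Python project with flake8, pytest and build installed; substitute your own toolchain.

```python
# Minimal CI pipeline runner sketch: each stage must pass before the next runs.
import subprocess
import sys

STAGES = [
    ("static analysis", ["flake8", "."]),
    ("unit tests", ["pytest", "-q"]),
    ("build package", ["python", "-m", "build"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- Running stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; stopping the pipeline.")
            sys.exit(result.returncode)
    print("All stages passed; the artifact is ready for the CD stages.")

if __name__ == "__main__":
    run_pipeline()
```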

Continuous Delivery (CD) is an extension of Continuous Integration (CI) where further tests such as UI/UX, load, UAT and QA are done in varied environments, and the build is finally deployed to a staging environment. This process makes it deploy-ready for production, based on approval.

Continuous Deployment is the same as Continuous Delivery, with the exception that the deployable package is automatically promoted into production without the need for human approval. This is routinely done in highly mature DevOps implementations but is obviously too risky for those just starting out on their DevOps journeys. Such organizations could stay with Continuous Delivery until their DevOps environment stabilizes. For those progressing into Continuous Deployment, DevOps offers granular control over things like which users to deploy to and the time of deployment.

Infrastructure as Code (IaC) is another important DevOps practice used with Continuous Delivery. As the name suggests, this is the management of infrastructure using code where the desired configuration settings of the environment are specified as code. Every time this code is run, the same environment is generated. This solves issues arising out of inconsistencies in environment configurations and the need for manually maintaining the settings of deployment environments. The pipeline executes this code to configure multiple test targets, enabling application testing in simulated production environments. IaC typically follows application code versioning and is validated just like regular code. It also enables dynamic provisioning of environments on-demand.
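
A toy sketch of the IaC idea (illustrative only, not a substitute for real IaC tools): the desired environment is declared as data, and an idempotent apply step converges the environment to that declaration, so running it twice yields the same result. The resource names and settings below are invented.

```python
# Toy Infrastructure-as-Code sketch: desired state is declared as data and an
# idempotent apply() converges the "environment" (a dict standing in for real
# infrastructure) to that state. Real IaC tools follow the same
# declare-then-converge idea against actual cloud APIs.

DESIRED_STATE = {
    "web-server": {"size": "medium", "port": 443},
    "db-server": {"size": "large", "port": 5432},
}

def apply(environment, desired):
    # Create or update resources that are declared.
    for name, config in desired.items():
        if environment.get(name) != config:
            action = "updating" if name in environment else "creating"
            print(f"{action} {name} -> {config}")
            environment[name] = dict(config)
    # Remove resources that are no longer declared.
    for name in list(environment):
        if name not in desired:
            print(f"removing {name}")
            del environment[name]
    return environment

env = {"web-server": {"size": "small", "port": 80},
       "legacy-box": {"size": "small", "port": 22}}
apply(env, DESIRED_STATE)   # converges the environment to the declaration
apply(env, DESIRED_STATE)   # second run makes no changes (idempotent)
```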

DevOps Benefits

High delivery velocity and predictable quality are automatic outcomes of a good DevOps implementation, since iterative development based on feedback and course-correction, continuous testing, and continuous monitoring are core to the methodology. No time is lost on big releases or in managing big-release mayhem. Continuous customer feedback looped into the process helps avoid go-live surprises (read shocks) for the customer.

DevOps principles foster responsible autonomy and a spirit of collaboration where the entire cross-functional, cross-trained DevOps team works together and takes end-to-end responsibility from start to finish.  

Pipeline Automation injects speed, delivery predictability and enhances productivity by freeing human resources from manual tasks and giving them a sense of satisfaction and purpose as they are able to routinely see their work in customer’s hands.

Taking the plunge

Everyone wants to hop on the DevOps bandwagon but not all of them have clarity on where and how to start. DevOps is first and foremost a change in work culture and its implementation is primarily an exercise in changing mindsets and behavioural aspects at the workplace.

Importantly, the organization needs to arrive at clear objectives and expected business outcomes. As with most things, it’s always a good idea to start small, stabilize and use that as a pivot to move forward to the next baby step.  Piloting it on a low-risk application with few users will help everyone ease into the new style of working and give the DevOps movement some momentum. It would also help to initially run the current and DevOps environments in parallel to reduce risk.

Importantly, the initiative needs to be backed by the right processes, automation tools, and employee enablement by cross-training software developers, QA, and IT personnel, enabling them to take charge of the entire process.

Amazon and Netflix are fantastic examples of DevOps done right. Dave Hahn, SRE manager at Netflix says as a company, they don’t think about DevOps and that DevOps is just the wonderful result of a healthy culture and healthy thinking!  Netflix, according to Dave, has millions of customers across the globe, has hundreds of thousands of customer interactions every second, streams tens of billions of hours of entertainment every quarter and manages it all with just 10s of Ops engineers who are also software engineers! That’s the power of DevOps!

For information on our DevOps offerings, please click here.