RASA – an Open Source Chatbot Solution

Maruvada Deepti

Ever wondered if the agent you are chatting with online is a human or a robot? The answer would increasingly be the latter across a range of industries. Conversational agents, or chatbots, are being employed by organizations as their first line of support to reduce response times.

The first generation of bots was not very smart; they could understand only a limited set of queries based on keywords. However, the commoditization of NLP and machine learning by Wit.ai, API.ai, Luis.ai, Amazon Alexa, IBM Watson, and others has resulted in much more intelligent bots.

What are the different chatbot platforms?

There are many platforms out there that are easy to use, like DialogFlow, Bot Framework, IBM Watson, etc. But most of them are closed systems, not open source. They cannot be hosted on our own servers and must run in the vendor's cloud. They are also generalized by design rather than tailored to a specific use case.

DialogFlow vs. RASA

DialogFlow

  • Formerly known as API.ai before being acquired by Google.
  • It is a mostly complete tool for creating a chatbot; that is, it does almost everything you need for most chatbots.
  • Specifically, it can handle classification of intents and entities. It uses what is known as context to handle dialogue, and it allows webhooks for fulfillment.
  • One thing it does not have, that is often desirable for chatbots, is some form of end-user management.
  • It has a robust API, which allows us to define entities/intents/etc. either via the API or with their web based interface.
  • Data is hosted in the cloud, and any interaction with API.ai requires communication with the cloud.
  • It cannot be operated on-premises.

Rasa NLU + Core

  • To compete with leading frameworks like Google DialogFlow and Microsoft LUIS, RASA offers two components: NLU and Core.
  • RASA NLU handles intent classification and entity extraction, whereas RASA Core takes care of the dialogue flow and predicts the “probable” next state of the conversation.
  • Unlike DialogFlow, RASA does not provide a complete user interface; users are free to customize it and develop Python scripts on top of it.
  • In contrast to DialogFlow, RASA does not provide hosting facilities. Users can host it on their own server, which also gives them ownership of the data.

What makes RASA different?

Rasa is an open source machine learning tool for developers and product teams to expand the abilities of bots beyond answering simple questions. It also gives developers control over the NLU, which can be customized for a specific use case.

Rasa takes inspiration from different sources for building a conversational AI. It uses machine learning libraries and deep learning frameworks such as TensorFlow and Keras.

The Rasa Stack has also seen rapid growth within two years of its release.

RASA terminologies

  • Intent: Consider it as the intention or purpose of the user input. If a user says, “Which day is today?”, the intent would be finding the day of the week.
  • Entity: Useful information that can be extracted from the user input, such as a place or time. From the previous example, the intent tells us the aim is to find the day of the week, but of which date? If we extract “Today” as an entity, we can perform the action on today.
  • Actions: As the name suggests, an action is an operation the bot can perform. It could be replying with something (text, image, video, suggestion, etc.), querying a database, or anything else possible in code.
  • Stories: Sample interactions between the user and the bot, defined in terms of intents captured and actions performed. The developer specifies what the bot should do when it receives input of a certain intent, with or without certain entities. For example, if the user’s intent is to find the day of the week and the entity is “today”, the bot finds the day of the week for today and replies.

RASA Stack

Rasa has two major components:

  • RASA NLU: a library for natural language understanding that provides intent classification and entity extraction. This helps the chatbot understand what the user is saying. Refer to the diagram below for how NLU processes user input.
[Diagram: RASA NLU processing of user input]

  • RASA CORE: uses machine learning techniques to generalize the dialogue flow of the system. It also predicts the next best action based on the input from NLU, the conversation history, and the training data.

RASA architecture

This diagram shows the basic steps of how an assistant built with Rasa responds to a message:

[Diagram: how a Rasa assistant responds to a message]

The steps are as follows:

  • The message is received and passed to an Interpreter, which converts it into a dictionary including the original text, the intent, and any entities that were found. This part is handled by NLU.
  • The Tracker is the object which keeps track of conversation state. It receives the info that a new message has come in.
  • The policy receives the current state of the tracker.
  • The policy chooses which action to take next.
  • The chosen action is logged by the tracker.
  • A response is sent to the user.

Areas of application

RASA serves as a one-stop solution across various industries:

  • Customer Service: broadly used for technical support, accounts and billing, conversational search, and travel concierge services.
  • Financial Services: used in many banks for account management, bill payments, financial advice, and fraud protection.
  • Healthcare: mainly used for fitness and wellbeing, health insurance, and more.

What’s next?

As any machine learning developer will tell you, improving an AI assistant is an ongoing task, but the RASA team has set their sights on one big roadmap item: updating to use the Response Selector NLU component, introduced with Rasa 1.3. “The response selector is a completely different model that uses the actual text of an incoming user message to directly predict a response for it.”

References:

https://rasa.com/product/features/

https://rasa.com/docs/rasa/user-guide/rasa-tutorial/

About the Author –

Deepti is an ML Engineer at Location Zero in GAVS. She is a voracious reader and has a keen interest in learning newer technologies. In her leisure time, she likes to sing and draw illustrations.
She believes that nothing influences her more than a shared experience.

JAVA – Cache Management

Sivaprakash Krishnan

This article explores various Java caching technologies that can play a critical role in improving application performance.

What is Cache Management?

A cache is a temporary, high-speed memory buffer that stores the most frequently used data, such as live transactions and logical datasets. This greatly improves the performance of an application, as reads and writes happen in the memory buffer, reducing retrieval time and the load on the primary data source. Implementing and maintaining a cache in any Java enterprise application is therefore important.

  • The client-side cache is used to temporarily store static data transmitted over the network from the server, to avoid unnecessary calls to the server.
  • The server-side cache could be a query cache, CDN cache, or proxy cache, where the data is stored on the respective servers instead of temporarily on the browser.

Adoption of the right caching techniques and tools allows the programmer to focus on the implementation of business logic, leaving backend complexities like cache expiration, mutual exclusion, spooling, and cache consistency to the frameworks and tools.

Caching should be designed specifically for the environment, considering single or multiple JVMs and clusters. Given below are multiple scenarios where caching can be used to improve performance.

1. In-process Cache – The in-process/local cache is the simplest cache, where the cache store is effectively an object accessed inside the application process. It is much faster than any cache accessed over a network and is strictly available only to the process that hosts it. A minimal sketch of such a cache follows the points below.


  • If the application is deployed only in one node, then in-process caching is the right candidate to store frequently accessed data with fast data access.
  • If the in-process cache is to be deployed in multiple instances of the application, then keeping data in-sync across all instances could be a challenge and cause data inconsistency.
  • An in-process cache can bring down the performance of any application where the server memory is limited and shared. In such cases, the garbage collector will be invoked more often to clean up objects, which may lead to performance overhead.
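
The following is a minimal sketch of an in-process cache built on a plain ConcurrentHashMap. The class name and the loader function are illustrative and not tied to any particular framework; a production cache would typically add a maximum size and expiry, or use a library such as Caffeine or Guava instead of a hand-rolled map.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Minimal in-process cache: values live in the application's own heap,
// so a lookup is a local method call with no network hop.
public class InProcessCache<K, V> {

    private final ConcurrentMap<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // loads a value on a cache miss

    public InProcessCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // Load and cache the value the first time the key is requested.
        return store.computeIfAbsent(key, loader);
    }

    public void invalidate(K key) {
        store.remove(key);
    }
}
```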

In-Memory Distributed Cache

Distributed caches can be built externally to an application; they support reads and writes to and from data repositories, keep frequently accessed data in RAM, and avoid continuously fetching data from the data source. Such caches can be deployed on a cluster of multiple nodes, forming a single logical view (a minimal sketch follows the points below).

  • In-memory distributed cache is suitable for applications running on multiple clusters where performance is key. Data inconsistency and shared memory aren’t matters of concern, as a distributed cache is deployed in the cluster as a single logical state.
  • As inter-process communication over a network is required to access the cache, latency, network failures, and object serialization are overheads that could degrade performance.
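
As an illustration only, the sketch below uses Hazelcast, one of several distributed caching options; the map name is arbitrary and cluster configuration is omitted, so treat it as a minimal example rather than a recommended setup.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class DistributedCacheExample {

    public static void main(String[] args) {
        // Each application node starts (or joins) a Hazelcast cluster member;
        // all members share the same logical "sessions" map.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        Map<String, String> sessions = hz.getMap("sessions");
        sessions.put("user-1", "cached-session-data");

        // On any other node of the same cluster, get("user-1") returns this value.
        System.out.println(sessions.get("user-1"));

        hz.shutdown();
    }
}
```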

2. In-memory database

An in-memory database (IMDB) stores data in main memory instead of on disk to produce quicker response times. Queries are executed directly on the dataset held in memory, avoiding frequent disk reads and writes, which provides better throughput and faster response times. It also provides a configurable data persistence mechanism to avoid data loss.

Redis is an open-source in-memory data structure store used as a database, cache, and message broker. It offers data replication, different levels of persistence, high availability (HA), and automatic partitioning, which improve read/write performance.

Replacing the RDBMS with an in-memory database can improve the performance of an application without changing the application layer.
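
As a small sketch of the idea, the snippet below uses the Jedis client to cache a value in Redis with a time-to-live; the key, value, and TTL are purely illustrative, and a local Redis server is assumed.

```java
import redis.clients.jedis.Jedis;

public class RedisCacheExample {

    public static void main(String[] args) {
        // Assumes a Redis server is reachable on localhost:6379.
        try (Jedis jedis = new Jedis("localhost", 6379)) {

            // Cache a serialized value with a 300-second time-to-live.
            jedis.setex("customer:42:profile", 300, "{\"name\":\"Asha\",\"tier\":\"gold\"}");

            // Later reads are served from memory; a null result means the entry
            // expired or was never cached, so fall back to the primary data source.
            String cached = jedis.get("customer:42:profile");
            System.out.println(cached != null ? cached : "cache miss");
        }
    }
}
```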

3. In-Memory Data Grid

An in-memory data grid (IMDG) is a data structure that resides entirely in RAM and is distributed among multiple servers.

Key features

  • Parallel computation of the data in memory
  • Search, aggregation, and sorting of the data in memory
  • Transaction management in memory
  • Event-handling

Cache Use Cases

There are use cases where a specific caching approach should be adopted to improve the performance of the application.

1. Application Cache

Application cache caches web content that can be accessed offline. Application owners/developers have the flexibility to configure what to cache and make it available for offline users. It has the following advantages:

  • Offline browsing
  • Quicker retrieval of data
  • Reduced load on servers

2. Level 1 (L1) Cache

This is the default transactional cache, scoped to a session. It can be managed by any Java Persistence API (JPA) provider or object-relational mapping (ORM) tool.

The L1 cache stores entities that fall under a specific session and is cleared once the session is closed. If there are multiple transactions inside one session, entities from all of these transactions will be stored.
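
As a hedged sketch of the behaviour described above, the snippet below uses the standard JPA EntityManager; the persistence unit name and the Product entity (a mapped @Entity, shown in the next section's sketch) are assumptions for illustration.

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class L1CacheExample {

    public static void main(String[] args) {
        // "shop" is an illustrative persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("shop");
        EntityManager em = emf.createEntityManager();

        // The first find() hits the database and places the entity in the
        // session-scoped (L1) persistence context.
        Product first = em.find(Product.class, 1L);

        // A second find() for the same id within the same EntityManager is
        // served from the L1 cache; no SQL is issued.
        Product second = em.find(Product.class, 1L);

        System.out.println(first == second);   // true: same managed instance

        em.close();    // closing the session clears its L1 cache
        emf.close();
    }
}
```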

3. Level 2 (L2) Cache

The L2 cache can be configured to provide custom caches that hold data for all entities that are to be cached. It is configured at the session factory level and exists as long as the session factory is available. An L2 cache can be shared across the following (a mapping sketch follows this list):

  • Sessions in an application.
  • Applications on the same servers with the same database.
  • Application clusters running on multiple nodes but pointing to the same database.
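
Below is a minimal Hibernate-flavoured sketch of marking an entity as L2-cacheable. It assumes Hibernate as the JPA provider with second-level caching enabled (for example, hibernate.cache.use_second_level_cache=true and a cache provider such as Ehcache); the entity and its fields are illustrative.

```java
import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// Instances of this entity are held in the session-factory-wide L2 cache
// and survive across sessions, until evicted or the factory is closed.
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class Product {

    @Id
    private Long id;

    private String name;

    public Long getId() { return id; }

    public String getName() { return name; }
}
```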

4. Proxy / Load balancer cache

Enabling this reduces the load on the application servers. When similar content is queried or requested frequently, the proxy serves the content from its cache rather than routing the request back to the application servers.

When a dataset is requested for the first time, the proxy saves the response from the application server to a disk cache and uses it to respond to subsequent client requests without having to route them back to the application server. Apache, NGINX, and F5 support proxy caching.
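
One common way the application participates in proxy caching is by emitting Cache-Control headers that downstream proxies honour. The servlet below is a hedged sketch; the servlet name, response body, and 5-minute lifetime are illustrative.

```java
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CatalogServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Tell the proxy / load balancer that this response may be cached
        // and served to other clients for up to 5 minutes.
        resp.setHeader("Cache-Control", "public, max-age=300");
        resp.setContentType("application/json");
        resp.getWriter().write("{\"items\":[]}");   // stand-in for real content
    }
}
```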


5. Hybrid Cache

A hybrid cache is a combination of JPA/ORM frameworks and open source services. It is used in applications where response time is a key factor.

Caching Design Considerations

  • Data loading/updating
  • Performance/memory size
  • Eviction policy
  • Concurrency
  • Cache statistics.

1. Data Loading/Updating

Data loading into a cache is an important design decision to maintain consistency across all cached content. The following approaches can be considered to load data:

  • Using default function/configuration provided by JPA and ORM frameworks to load/update data.
  • Implementing key-value maps using open-source cache APIs.
  • Programmatically loading entities through automatic or explicit insertion.
  • Loading data from an external application through synchronous or asynchronous communication.

2. Performance/Memory Size

Resource configuration is an important factor in achieving the performance SLA. Available memory and CPU architecture play a vital role in application performance. Available memory has a direct impact on garbage collection performance: frequent GC cycles can bring down performance.

3. Eviction Policy

An eviction policy enables a cache to ensure that the size of the cache doesn’t exceed the maximum limit. The eviction algorithm decides what elements can be removed from the cache depending on the configured eviction policy thereby creating space for the new datasets.

There are various popular eviction algorithms used in cache solutions (an LRU sketch follows the list below):

  • Least Recently Used (LRU)
  • Least Frequently Used (LFU)
  • First In, First Out (FIFO)
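
As an example of the most common policy, here is a minimal LRU cache built on LinkedHashMap's access-order mode; the capacity is arbitrary and the class is a sketch rather than a production-grade implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: with accessOrder = true, LinkedHashMap keeps entries
// ordered by most recent access, and removeEldestEntry evicts the least
// recently used entry once the configured capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);   // accessOrder = true enables LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

For example, new LruCache<String, String>(1000) keeps at most 1,000 entries and silently evicts the least recently used one on overflow.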

4. Concurrency

Concurrency is a common issue in enterprise applications. It can occur when multiple clients try to update the same data object at the same time, for example during a cache refresh, which creates conflicts and leaves the system in an inconsistent state. A common solution is to use a lock, but this may affect performance, so optimization techniques should be considered; the sketch below contrasts an unsafe update with a lock-free atomic one.
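
The sketch below is illustrative; the map name and counter semantics are assumptions. It shows why a naive read-modify-write on a shared cache entry loses updates, and how ConcurrentHashMap.merge performs the same update atomically without an explicit application-level lock.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentUpdateExample {

    private final ConcurrentMap<String, Integer> viewCounts = new ConcurrentHashMap<>();

    // Unsafe read-modify-write: two threads can read the same old value,
    // and one of the two increments is silently lost.
    public void incrementUnsafe(String page) {
        Integer current = viewCounts.getOrDefault(page, 0);
        viewCounts.put(page, current + 1);
    }

    // Atomic alternative: merge() applies the remapping function as a single
    // atomic operation on the entry, so no increment is lost and no explicit
    // lock is held around the whole cache.
    public void incrementSafe(String page) {
        viewCounts.merge(page, 1, Integer::sum);
    }
}
```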

5. Cache Statistics

Cache statistics are used to identify the health of a cache and provide insights about its behavior and performance. The following attributes can be used (a short sketch of reading them follows the list):

  • Hit Count: Indicates the number of times a cache lookup has returned a cached value.
  • Miss Count: Indicates the number of times a cache lookup has returned an uncached (null or newly loaded) value.
  • Load Success Count: Indicates the number of times the cache has successfully loaded a new value.
  • Total Load Time: Indicates the time spent (in nanoseconds) loading new values.
  • Load Exception Count: Indicates the number of exceptions thrown while loading an entry.
  • Eviction Count: Indicates the number of entries evicted from the cache.
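
These attribute names mirror the counters exposed by common caching libraries. As a hedged example, the sketch below uses Google Guava's LoadingCache with statistics recording enabled; the cache size, keys, and loader are illustrative.

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.CacheStats;
import com.google.common.cache.LoadingCache;

public class CacheStatsExample {

    public static void main(String[] args) throws Exception {
        // recordStats() must be enabled explicitly; otherwise all counters stay at zero.
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1_000)
                .recordStats()
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return "value-for-" + key;   // stand-in for a real lookup
                    }
                });

        cache.get("a");   // miss + successful load
        cache.get("a");   // hit

        CacheStats stats = cache.stats();
        System.out.println("hits=" + stats.hitCount()
                + " misses=" + stats.missCount()
                + " loads=" + stats.loadSuccessCount()
                + " loadTimeNs=" + stats.totalLoadTime()
                + " loadExceptions=" + stats.loadExceptionCount()
                + " evictions=" + stats.evictionCount());
    }
}
```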

Various Caching Solutions

There are various Java caching solutions available — the right choice depends on the use case.


At GAVS, we focus on building a strong foundation of coding practices. We encourage and implement the “Design First, Code Later” principle and “Design Oriented Coding Practices” to bring in design thinking and engineering mindset to build stronger solutions.

We have been training and mentoring our talent on cutting-edge JAVA technologies, building reusable frameworks, templates, and solutions in major areas like Security, DevOps, Migration, Performance, etc. Our objective is to “Partner with customers to realize business benefits through effective adoption of cutting-edge JAVA technologies, thereby enabling customer success”.

About the Author –

Sivaprakash is a solutions architect with strong solutions and design skills. He is a seasoned expert in JAVA, Big Data, DevOps, Cloud, Containers, and Micro Services. He has successfully designed and implemented a stable monitoring platform for ZIF. He has also designed and driven Cloud assessment/migration, enterprise BRMS, and IoT-based solutions for many of our customers. At present, his focus is on building ‘ZIF Business’ a new-generation AIOps platform aligned to business outcomes.

Hyperautomation


Bindu Vijayan

According to Gartner, “Hyper-automation refers to an approach in which organizations rapidly identify and automate as many business processes as possible. It involves the use of a combination of technology tools, including but not limited to machine learning, packaged software and automation tools to deliver work”. Hyperautomation is also among the year’s top 10 technology trends, according to Gartner.

It is expected that by 2024, organizations will be able to lower their operational costs by 30% by combining hyper-automation technologies with redesigned operational processes. According to Coherent Market Insights, “Hyper Automation Market will Surpass US$ 23.7 Billion by the end of 2027.  The global hyper automation market was valued at US$ 4.2 Billion in 2017 and is expected to exhibit a CAGR of 18.9% over the forecast period (2019-2027).”

How it works

To put it simply, hyperautomation uses AI to dramatically enhance automation technologies and augment human capabilities. Given the spectrum of tools it uses, such as Robotic Process Automation (RPA), Machine Learning (ML), and Artificial Intelligence (AI), all functioning in sync to automate complex business processes, even those that once called for input from SMEs, it is a powerful tool for organizations in their digital transformation journey.

Hyperautomation brings robotic intelligence into the traditional automation process and enhances the completion of processes to make them more efficient, faster, and error-free. By combining AI tools with RPA, the technology can automate almost any repetitive task; it automates the automation by identifying business processes and creating bots to automate them. It calls for different technologies to be leveraged, which means the businesses investing in it should have the right tools, and the tools should be interoperable. The main feature of hyperautomation is that it merges several forms of automation that work seamlessly together, so a hyperautomation strategy can consist of RPA, AI, advanced analytics, intelligent business management, and so on.

With RPA, bots are programmed to get into software, manipulate data, and respond to prompts. RPA can be as complex as handling multiple systems through several transactions, or as simple as copying information between applications. Combined with Process Automation or Business Process Automation, which enables the management of processes across systems, it can help streamline processes to increase business performance. The tool or platform should be easy to use and, importantly, scalable; investing in a platform that can integrate with existing systems is crucial. The selection of the right tools is what Gartner calls “architecting for hyperautomation.”

Impact of hyperautomation

Hyperautomation has huge potential to speed up digital transformation for businesses, given that it automates complex work that usually depends on input from humans. With that work moved to intelligent digital workers (RPA with AI) that can perform repetitive tasks endlessly, human performance is augmented. These digital workers can then become real game-changers with their efficiency and their capability to connect to multiple business applications, discover processes, work with voluminous data, and analyse it in order to arrive at decisions for further or new automation.

The impact of being able to leverage previously inaccessible data and processes and automating them often results in the creation of a digital twin of the organization (DTO): virtual models of every physical asset and process in an organization. Sensors and other devices monitor digital twins to gather vital information on their condition, and insights are derived about their health and performance. The more data there is, the smarter the systems become, and the sharper the insights they can provide to thwart problems, help businesses make informed decisions on new services and products, and in general support informed assessments. Having a DTO throws light on the hitherto unknown interactions between functions and processes, and on how they can drive value and business opportunities. That is powerful: you get to see the business outcome, or the negative effect, as it happens, and that sort of intelligence within the organization is a powerful tool for making very informed decisions.

Hyperautomation is the future, an unavoidable market state

“Hyperautomation is an unavoidable market state in which organizations must rapidly identify and automate all possible business processes.” – Gartner

It is interesting to note that some companies are coming up with no-code automation. Creating tools that can be easily used even by those who cannot read or write code can be a major advantage. For example, if employees are able to automate the multiple processes they are responsible for, hyperautomation can help get more done at a much faster pace, sparing time for them to get involved in planning and strategy. This brings more flexibility and agility within teams, as automation can be managed by the teams for the processes they are involved in.

Conclusion

With hyperautomation, it becomes easy for companies to actually see the ROI they are realizing from the processes that have been automated, with clear visibility of the time and money saved. Hyperautomation enables seamless communication between different data systems, providing organizations with flexibility and digital agility. Businesses enjoy the advantages of increased productivity, quality output, greater compliance, better insights, advanced analytics, and of course automated processes. It allows machines to gain real insights into business processes and understand them well enough to make significant improvements.

“Organizations need the ability to reconfigure operations and supporting processes in response to evolving needs and competitive threats in the market. A hyperautomated future state can only be achieved through hyper agile working practices and tools.”  – Gartner


Assess Your Organization’s Maturity in Adopting AIOps


Anoop Aravindakshan

Artificial Intelligence for IT operations (AIOps) is adopted by organizations to deliver tangible Business Outcomes. These business outcomes have a direct impact on companies’ revenue and customer satisfaction.

A survey from AIOps Exchange 2019 reports that 84% of the business owners surveyed confirmed that they are actively evaluating AIOps for adoption in their organizations.

So, is AIOps just automation? Absolutely NOT!

Artificial Intelligence for IT operations implies the implementation of truly autonomous Artificial Intelligence in ITOps, which needs to be adopted as an organization-wide strategy. Organizations will have to assess their existing landscape and processes, and decide where to start. That is the only way to achieve a true implementation of AIOps.

Every organization trying to evaluate AIOps as a strategy should read through this article to understand their current maturity, and then move forward to reach the pinnacle of Artificial Intelligence in IT Operations.

The primary success factor in adopting AIOps is derived from the Business Outcomes the organization is trying to achieve by implementing AIOps – that is the only way to calculate ROI.

There are 4 levels of maturity in AIOps adoption. Based on our experience in developing an AIOps platform and implementing it across multiple industries, we have arrived at these 4 levels. Assessing an organization against each of these levels helps in achieving the goal of TRUE Artificial Intelligence in IT Operations.

Level 1: Knee-jerk

Events and logs are generated in silos and collected from various applications and devices in the infrastructure. These are used to generate alerts that are passed to command centres for escalation as per the defined SOPs (standard operating procedures). The engineering teams work in silos, unaware of the business impact these alerts could potentially create. Here, operations are very reactive, which could cost the organization millions of dollars.

Level 2: Unified

All events, logs, and alerts are integrated into one central locale. ITSM processes are unified. This helps in breaking silos and engineering teams are better prepared to tackle business impacts. SOPs have been adjusted since the process is unified, but this is still reactive incident management.

Level 3: Intelligent

Machine Learning algorithms (either supervised or unsupervised) have been implemented on the unified data to derive insights. There are baseline metrics that are calibrated and used as a reference for future events. With more data, the metrics get richer. The IT operations team can correlate incidents and events with business impact by leveraging AI & ML. If the Mean Time To Resolve (MTTR) an incident has been reduced by automated identification of the root cause, then the organization has attained level 3 maturity in AIOps.

Level 4: Predictive & Autonomous

The pinnacle of AIOps is level 4. If incidents and performance degradation of applications can be predicted by leveraging Artificial Intelligence, it implies improved application availability. Autonomous remediation bots can be triggered spontaneously based on the predictive insights, to fix incidents that are prone to happen in the enterprise. Level 4 is a paradigm shift in IT operations – moving operations entirely from being reactive, to becoming proactive.

Conclusion

As IT operations teams move up each level, the essential goal to keep in mind is the long-term strategy to be attained by adopting AIOps. Artificial Intelligence has matured over the past few decades, and it is up to AIOps platforms to embrace it effectively. While choosing an AIOps platform, measure the maturity of the platform’s artificial intelligence coefficient.

About the Author:

An evangelist of Zero Incident Framework™, Anoop has been a part of the product engineering team for a long time and has recently forayed into product marketing. He has over 14 years of experience in Information Technology across various verticals, which include Banking, Healthcare, Aerospace, Manufacturing, CRM, Gaming, and Mobile.

Creating Purposeful Corporations, In pursuit of Conscious Capitalism


Sumit Ganguli

“More than 8 million metric tons of plastic leak into the ocean every year, so building infrastructure that stops plastic before it gets into the ocean is key to solving this issue,” said H. Fisk Johnson, Chairman and CEO of SC Johnson. SC Johnson, an industry-leading manufacturer of household consumer brands, has launched a global partnership to stop plastic waste from entering the ocean and fight poverty.

In August 2019, 42 years after its inception, the Business Roundtable, which has periodically issued Principles of Corporate Governance with an emphasis on serving shareholders, released a new Statement on the Purpose of a Corporation. This new statement was signed by 181 CEOs who have committed to lead their companies to benefit all stakeholders – customers, employees, suppliers, communities, and shareholders. Jamie Dimon, Chairman and CEO of JPMorgan Chase & Co., is the Chairman of the Business Roundtable. He went on to say, “The American dream is alive, but fraying. Major employers are investing in their workers and communities because they know it is the only way to be successful over the long term. These modernized principles reflect the business community’s unwavering commitment to continue to push for an economy that serves all Americans.”

Today, the definition of corporate purpose seems to be changing. Companies are now focused on the environment and on all stakeholders. According to a Harvard Business School survey, there is growing ambivalence about capitalism that promotes only the pursuit of wealth.

But this is a far cry from when we were growing up in India in the 1980s. Our definition of personal success was to expeditiously acquire wealth. Most of us who were studying engineering, medicine, or pursuing other professional degrees were looking for a job that would sustain us and support our immediate families. The other option was to emigrate to America or other developed countries for further studies and make a life here – to celebrate capitalism in all its glory.

In India, we were quite steeped in religious festivals and rituals. We attended Baal Mandir and had moral science classes in school, but the concepts of service, altruism, seva, and sharing were largely platitudes and not part of our daily lives. There was an inbuilt cynicism about charity, and we never felt that when we grew up, we needed to think about the greater good of society.

And that is where Conscious Capitalism comes in. Instead of espousing Ayn Rand’s version of scorched-earth capitalism, “Selfishness is a Virtue”, or blindly following Gordon Gekko’s “Greed is good”, the media, parents, teachers, and influence makers could promote and ingrain in the youth, students, and people at large the idea that there is merit in wealth creation, but that it can be infused with altruism. We could celebrate the successful who also share. This could dispel the notion that charity and the sharing of wealth are only for the rich and the famous.


America gets criticized for many things around the world, but the world often overlooks that the largest amount of charity and donations has come from the USA. The Bill & Melinda Gates Foundation, Warren Buffett, Larry Ellison of Oracle, who has pledged a significant portion of his wealth to the Bill & Melinda Gates Foundation, Mark Zuckerberg of Facebook, and many others have absolutely embraced the concept of Conscious Capitalism for their corporations. But what would really broaden the pyramid is when early entrepreneurs and upcoming executives also engage in sharing and giving, and do not wait till they reach the pinnacle of success. We cannot expect only governmental initiatives to support the underprivileged. We need to celebrate Conscious Capitalism and the entrepreneurs and business leaders who are pursuing their dreams while also sharing some portion of their wealth with society.

At GAVS and through the Private Equity firm Basil Partners we are privileged to have been involved in an initiative to nurture and support a small isolated village named Ramanwadi in Maharashtra, through a project named Venu Madhuri (www.venumadhuri.org).  The volunteers involved in supporting this small village have brought success in several areas of rural development and the small hamlet is inching towards self-sufficiency.

Basil Partners, along with Apar Industries, seed-funded the midday meal program (www.annamrita.org), which feeds almost 1.26 million school students per day in Mumbai, and has promoted the Bhakti Vedanta Hospital in Mumbai.

These are all very humble efforts compared to some of the massive projects undertaken by the largest groups and individuals. However, they all make a difference. I truly believe that we need to internalize some of the credo and values espoused by H. Fisk Johnson and the work that companies like SC Johnson are doing, and emulate Azim Premji, Satya Nadella, and many others. They are the true ambassadors of Conscious Capitalism and are creating purposeful corporations.

Potential shifts in the world, #COVID-19

Saji Rajasekaran

Apart from the tremendous number of lives lost and the huge impact on several industries and jobs, COVID-19 has caused a lot of pain and distress. However, it has also shone a light on a few areas where we can hope to see a positive impact, short-term or long-term.

Mother Earth – Fewer people commuting, fewer aircraft in the air, and fewer cars on the road mean cleaner air, at least in the short term.

Healthcare Policies – Could the delays in testing, the lack of adequate screening infrastructure, and poor emergency management procedures hopefully drive a debate about changing our healthcare policies for the better?

Focusing on the family – People are spending more time with family. This could be good or bad, I guess, but the shutdown has afforded many families time to be around each other more than ever.

Better hygiene and better eating habits – Will this experience, at least temporarily help teach our generation to have better hygiene and help build better eating habits?

E-Learning – Could this experience provide the push needed to make e-learning more acceptable and potentially make university education cheaper in the long term?

Internet infrastructure – Teleworking and e-learning will stretch the internet bandwidth in homes and neighborhoods; will this prompt the industry to speed up its investment in better high-speed infrastructure?

Increased investment in poorer countries – The awareness that borders don’t quite stop viruses or the associated economic meltdowns in an increasingly connected world, hopefully changes the way developed countries treat poorer countries.

Growth in specific industries – Should we expect a growth spurt for cashless transactions, online grocery shopping/delivery, tele-medicine, and community based organic farming?

About the author:

Saji is a father of 2 kids, an executive, and someone figuring out how to make more time to do the things he wants to do; in that order. He has 20 years of experience leading successful teams across various industry domains and holds a Master’s in Business Administration from UNC Kenan-Flagler Business School.