The Healthcare Industry is going through a quiet revolution. Factors like disease trends, doctor demographics, regulatory policies, the environment, and technology are pushing the industry to turn to emerging technologies like AI to help it adapt to the pace of change. Here, we take a look at some key use cases of AI in Healthcare.
Medical Imaging
The application of Machine Learning (ML) in Medical Imaging is showing highly encouraging results. ML is a subset of AI in which algorithms and models help machines imitate the cognitive functions of the human brain and learn from their own experience.
AI can be gainfully used in the different stages of medical imaging: acquisition, image reconstruction, processing, interpretation, storage, data mining and beyond. The performance of ML computational models improves tremendously as they are exposed to more and more data, and this foundation on colossal amounts of data gradually enables them to outperform humans at interpretation. They begin to detect anomalies that are not perceptible to the human eye or discernible to the human brain.
What goes hand-in-hand with data is noise. Noise creates artifacts in images and reduces their quality, which can lead to inaccurate diagnosis. AI systems work through the clutter and aid noise reduction, leading to better precision in diagnosis, prognosis, staging, segmentation and treatment.
At the forefront of this use case is radiogenomics, which correlates cancer imaging features with gene expression. Needless to say, this will play a pivotal role in cancer research.
Drug Discovery
Drug Discovery is an arduous process that takes several years from the start of research to obtaining approval to market. Research involves laboring through copious amounts of medical literature to identify the dynamics between genes, molecular targets, pathways and candidate compounds. Sifting through all of this complex data to arrive at conclusions is an enormous challenge. When this voluminous data is fed to ML computational models, relationships are reliably established. AI, powered by domain knowledge, is slashing the time and cost involved in new drug development.
Cybersecurity in Healthcare
Data security is of paramount importance to Healthcare providers, who need to ensure the confidentiality, integrity, and availability of patient data. With cyberattacks increasing in number and complexity, these formidable threats are giving security teams sleepless nights. The main strength of AI is its ability to curate massive quantities of data (here, threat intelligence), nullify the noise, provide instant insights and self-learn in the process. The predictive and prescriptive capabilities of these computational models drastically reduce response time.
Virtual Health Assistants
Virtual health assistants like chatbots give patients 24/7 access to critical information, in addition to offering services like scheduling health check-ups or setting up appointments. AI-based platforms for wearable health devices and health apps come armed with features to monitor vital signs, daily activities, diet and sleep patterns, provide alerts for immediate action, and suggest personalized plans that enable healthy lifestyles.
AI for Healthcare IT Infrastructure
Healthcare IT Infrastructure, running the critical applications that enable patient care, is the heart of a Healthcare provider. With dynamically changing IT landscapes that are distributed, hybrid and on-demand, IT Operations teams are finding it hard to keep up. Artificial Intelligence for IT Operations (AIOps) is poised to fundamentally transform the Healthcare industry. It is powering Healthcare providers across the globe, who are adopting it to automate, predict, remediate and prevent incidents in their IT Infrastructure. GAVS’ Zero Incident Framework™ (ZIF), an AIOps platform, is a pure-play AI platform based on unsupervised Machine Learning and comes with the full suite of tools an IT Infrastructure team would need. Please watch this video to learn more.
Call it big data or the big bang of data – we’re in an era of data explosion. Our daily lives generate an enormous amount of data. Let’s do some simple math. About 12 billion ‘smart’ machines are connected to the Internet. With about 7 billion people on the planet, that is roughly 1.7 devices per person. The data produced every year is measured in exabytes and is growing exponentially. There has always been a search for an infrastructure that can handle this amount of data. One such venture at LinkedIn in 2010 resulted in the creation of Kafka. It was later donated to the Apache Software Foundation and is now called Apache Kafka.
What’s Kafka?
Apache Kafka is a versatile, distributed, replicated publish-subscribe messaging system. It lets you send messages between processes, applications, and servers.
To understand what a publish-subscribe messaging system is, it helps to first understand how a point-to-point messaging system works. In a point-to-point messaging system, messages are kept in a queue, and multiple consumers can consume from it; once a message is consumed, it disappears from the queue.
In a publish-subscribe system, messages are persisted in a topic. Unlike in a point-to-point system, consumers can subscribe to one or more topics and consume all messages on those topics. Messages remain on the topic after being consumed, so another consumer can receive the same information again. Hence, Kafka is a publish-subscribe messaging system.
“More than 33% of all Fortune 500 companies use Kafka.”
Apache Kafka is a distributed real-time
streaming platform, but in the eyes of a developer it’s an advanced version of
a log which is distributed and structured.
Why Kafka?
The two major concerns of Big Data are
to collect it and to be able to analyze it. A messaging system like Kafka can
help overcome these challenges. This allows the applications to focus on the
data without worrying about how to share it. For systems which have high
throughput, Kafka works much better than traditional messaging systems. It also
has better partitioning, replication, and fault-tolerance which makes it a
great fit for systems which process large-scale messages.
Following are the reasons to choose Kafka over other messaging systems:
One of the most powerful open-source event streaming platforms available
Offers solid horizontal scalability
A perfect fit for big data projects involving real-time processing
Durably stores data using a distributed commit log, meaning the data is persisted on disk
Highly reliable, since it is distributed and replicated
Excellent parallelism, since topics are partitioned
Core components of Kafka
Kafka’s main architectural components
include Producers, Topics, Consumers, Consumer Groups, Clusters, Brokers,
Partitions, Replicas, Leaders, and Followers.
Records: Data is stored as a key-value pair with a timestamp; this unit is called a record. Kafka records are immutable.
Topic: A topic is a stream of records and can be subscribed to by multiple consumers. It is the highest level of abstraction that Kafka provides.
Partition (a structured commit log): An ordered, immutable sequence of messages that is continually appended to. A partition cannot be divided across brokers, or even across disks; it lives entirely on one broker. Each record in a partition is assigned a sequential id number called the offset, which uniquely identifies it within the partition.
Segments: Each partition is sub-divided into segments. Instead of storing all the messages of a partition in a single file, Kafka splits them into chunks called segments. The default segment size is fairly large (1 GB).
Brokers: A Kafka broker (also called node or server) hosts topics. A Kafka broker receives messages from producers and stores them on disk by assigning them a unique offset. A Kafka broker allows consumers to fetch messages by topic, partition and offset.
ZooKeeper: ZooKeeper is a centralized service used to maintain naming and configuration data and to provide flexible and robust synchronization within distributed systems like Kafka. ZooKeeper maintains the leader-follower relationship across all the partitions.
Cluster: Multiple Kafka brokers join to form a cluster. The brokers can be distributed across different data centers and physical locations for redundancy and stability, and they coordinate among themselves using ZooKeeper.
Replication: All distributed systems must make trade-offs between guaranteeing consistency, availability, and partition tolerance (CAP Theorem). Apache Kafka’s design focuses on maintaining highly available and strongly consistent replicas. Strong consistency means that all replicas are byte-to-byte identical, which simplifies the job of an application developer.
Producer: Kafka producers send records to topics; the records are sometimes referred to as messages. Each send targets a single topic, but sends are asynchronous, so a producer can effectively write to multiple topics at once. Because Kafka is designed for broker scalability and performance, producers (rather than brokers) are responsible for choosing which partition each message is sent to (see the short Python sketch after this list).
Consumer: Kafka consumers read from topics. Since brokers do not track a consumer’s position, each consumer maintains the partition offset up to which it has consumed. Consumers can rewind or skip to any point in a partition simply by supplying an offset value.
Consumer Group: Multiple consumers interested in the same topic join to form a consumer group, which is uniquely identified by its group.id. Each consumer group subscribes to one or more topics and maintains its offset per topic.
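To make the producer and consumer roles concrete, here is a minimal sketch using the open-source kafka-python client. The broker address (localhost:9092), the topic name “events” and the group id are illustrative assumptions for the example, not details from this article.
# Minimal producer and consumer sketch (assumes the kafka-python client is
# installed via "pip install kafka-python"; broker, topic and group names are illustrative).
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# A record is a key-value pair; the key (if present) influences partition choice.
producer.send("events", key=b"device-42", value=b'{"temp": 21.5}')
producer.flush()  # block until buffered records are actually sent to the broker

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",   # start from the beginning if no offset is stored
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.key, record.value)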
How does Apache Kafka work?
When applications send data to a Kafka broker (node), the data gets stored in a topic, which is a logical grouping of partitions. Partitions are the actual unit of storage. In a multi-node configuration, the data is spread over multiple partitions across different machines. Data sent to the Kafka cluster is durably persisted to a partition. As explained before, a partition is an immutable data structure to which data can only be appended.
The data is sent to a partition based on the following rules:
If a producer specifies a partition number in the message record, then the message is persisted to that partition.
If a message record doesn’t have a partition id but has a key, then the partition is chosen based on the hash value of the key:
hashCode(key) % noOfPartitions
If neither a key nor a partition id is present, then Kafka uses a round-robin strategy to choose the partition (a simplified sketch of this selection logic follows below).
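As a rough illustration of these three rules, the sketch below re-implements the selection logic in plain Python. Real Kafka clients use a murmur2 hash and batching-aware strategies, so treat this only as a didactic model; the crc32 hash and the counter are stand-ins.
import itertools
import zlib

_round_robin = itertools.count()   # stand-in counter for the keyless case

def choose_partition(num_partitions, partition=None, key=None):
    # Rule 1: an explicit partition id in the record wins.
    if partition is not None:
        return partition
    # Rule 2: otherwise hash the key, i.e. hash(key) % noOfPartitions.
    if key is not None:
        return zlib.crc32(key) % num_partitions
    # Rule 3: no key and no partition id -> round-robin across partitions.
    return next(_round_robin) % num_partitions

print(choose_partition(6, key=b"device-42"))   # the same key always lands on the same partition
print(choose_partition(6))                     # keyless records rotate across partitions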
To achieve parallelism, each topic can have multiple partitions. The number of partitions is directly proportional to the achievable throughput and the degree of parallel access.
In a distributed environment, even though a topic has multiple partitions, each partition is tied to a single broker; it is not shared among the nodes.
What if a Kafka node fails and all the partitions tied to that node become unavailable?
To overcome this scenario, Kafka uses replication. A duplicate of each partition is maintained on other nodes, as many as the configured replication factor. At all times, one broker ‘owns’ a partition and is the node through which applications write to and read from that partition; this broker is called the partition leader. It replicates the data it receives to other brokers, called followers. They store the data as well and are ready to be elected as the leader in case the leader node dies.
Replicas that are fully caught up with the leader (original) partition are called in-sync replicas. For a producer or consumer to write to or read from a partition, it needs to know the partition’s leader, so this information has to be available from somewhere. Kafka stores such metadata in ZooKeeper.
Inside the partition’s directory in
the Kafka data directory, the segments can be viewed as index and log files.
/opt/Kafka-logs # tree
Why does each segment have an .index file accompanying the .log file?
One of the common operations in Kafka is to read the message at a particular offset. Scanning the log file to find that offset would be expensive, especially because the log file can grow to huge sizes (1 GB by default). This is where the .index file becomes useful: it stores the offset and the physical position of the message in the log file.
The prefix 00000000000000077674 on the .log, .index and .timeindex files is the name of the segment. Each segment file is named after the offset of the first message it contains. In the picture above, 00000000000000077674.log implies that it holds the messages starting from offset 77674.
The index file contains only two fields per entry, each 4 bytes (32 bits) long:
4 bytes: relative offset
4 bytes: physical position
As explained, the file name represents the base offset. In contrast to the log file, where each message carries its absolute offset, the entries in the index file contain offsets relative to the base offset. The second field is the physical position, in the log file, of the message whose absolute offset is base offset + relative offset.
If you need to read the message at offset 77723, you first look it up in the index file, which gives you the physical position of that message in the log file. You then seek directly to that position in the log file and start reading. This is efficient because the index file is sorted by offset, so a binary search quickly gets you to the right entry.
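The lookup can be sketched in a few lines of Python. Here the .index file is modelled as an in-memory list of (relative offset, byte position) pairs; the entries are invented for the example, and a real index is sparse, so after seeking you still scan forward to the exact offset.
import bisect

BASE_OFFSET = 77674   # the segment's file name, i.e. the offset of its first message

# Toy stand-in for a .index file: sorted (relative offset, byte position) entries.
index_entries = [(0, 0), (17, 4096), (35, 8192), (49, 12288)]

def position_for(target_offset):
    """Binary-search for the last index entry at or before the target offset."""
    relative = target_offset - BASE_OFFSET
    i = bisect.bisect_right([rel for rel, _ in index_entries], relative) - 1
    _, byte_position = index_entries[i]
    return byte_position   # seek here in the .log file, then read forward

print(position_for(77723))   # -> 12288, the nearest indexed position before offset 77723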
From the above explanation, we can derive the time complexity for a few scenarios:
To find a particular partition: O(1), constant time, since the broker knows where the partition for a given topic resides.
To find a segment in a partition: O(log n), since each segment’s file name is the offset of its first message, so a binary search can locate the right segment.
To find a message in a segment: O(log n), since the index file stores the positions of messages in the log file in ascending order of offset, so the offset can be found using a binary search.
By combining these lookups, Kafka locates the message and serves it to the consumers.
As Apache Kafka is a pub-sub messaging system, the consumers who have subscribed to a topic receive the data and can consume it. The consumer stores (commits) its offset every time it pulls data from a topic. On the next pull it supplies that offset, and so resumes reliably from where it left off.
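A sketch of that commit-and-resume behaviour with the kafka-python client is shown below; the broker address, topic, group id and offset value are all assumptions made for the example.
# Manual offset management sketch (kafka-python). Names and numbers are illustrative.
from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="billing-service",
    enable_auto_commit=False,        # we commit explicitly after processing
)
tp = TopicPartition("events", 0)
consumer.assign([tp])
consumer.seek(tp, 77723)             # rewind or skip to any offset we choose

batch = consumer.poll(timeout_ms=1000)   # dict of {TopicPartition: [records]}
for records in batch.values():
    for record in records:
        print(record.offset, record.value)

consumer.commit()                    # the next run resumes after the committed offset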
BBC’s ‘Monty Python’, a comedy series that aired during the late 1960s, was a huge hit. The Python programming language, released in the early 1990s, turned out to be a huge hit in the software fraternity too. The reasons run into a long list — dynamic typing, cross-platform portability, enforced readability of code, and a faster development turnaround, to name a few.
Python was conceived by a Dutch programmer, Guido van Rossum, who invented it during his Christmas holidays.
The language has been on the ascent since 2014, owing to its popularity in the data science and AI domains. See the Google Trends report in Exhibit 1. No wonder Python has risen to third position in the latest TIOBE programming index.
Exhibit 1: Google Trends
Report
Exhibit 2: TIOBE index
Python is being used across a surprisingly wide array of domains and industries. Its power is exploited in popular web applications like YouTube, Dropbox and BitTorrent. NASA has used it in space shuttle mission design, and Python also played a part in the analysis behind the discovery of the Higgs boson, the ‘God particle’. The NSA has used it for cryptography, thanks to its rich set of modules. It has also been used by entertainment giants like Disney, Sony Pictures and DreamWorks to develop games and movies.
Now that data is becoming ‘BIG’, programmers are turning to Python for web scraping and sentiment analysis. Think of Big Data, and the first technology that comes to a programmer’s mind for processing it (ETL and data mining) is Python.
Learning Python is quite fun. Thanks to an innovative project called Jupyter, even someone just getting their feet wet in programming can quickly learn the concepts.
Possessing the features of both scripting languages like Tcl, Perl and Scheme and systems programming languages like C, C++ and Java, Python is easy to code and run.
Show a Java program and a Python script to a novice programmer, and they will almost certainly find the Python code more readable. Python is a language that enforces indentation, which is why no Python code looks ‘ugly’. The source code is first converted to platform-independent bytecode, making Python a cross-platform language. Unlike with C and C++, there is no separate compile step before you run, which makes the life of software developers easier.
Let’s draw a comparison between Python and C++. The former is an interpreted language while the latter is a compiled one. C++ follows a two-stage execution model, while Python scripts bypass an explicit compilation stage.
In C++, you use a compiler that
converts your source code into machine code and produces an executable. The
executable is a separate file that can then be run as a stand-alone program.
Exhibit 3
This process outputs actual
machine instructions for the specific processor and operating system it’s built
for. As shown in Exhibit 4, you’d have to recompile your program separately for
Windows, Mac, and Linux:
Exhibit 4
You’ll likely need to modify your
C++ code to run on those different systems as well.
Python, on the other hand, uses a different process. Remember that you’ll be looking at CPython, written in C, which is the standard implementation of the language. Unless you’re doing something special, this is the Python you’re running. CPython is faster than Jython (the Java implementation of Python) or IronPython (the .NET implementation).
Python runs each time you execute your program. It compiles your source just as the C++ compiler does. The difference is that Python compiles to bytecode instead of native machine code. Bytecode is the native instruction set of the Python virtual machine. To speed up subsequent runs of your program, Python stores the bytecode in .pyc files:
Exhibit 5
If you’re using Python 2, then
you’ll find these files next to the .py files. For Python 3, you’ll find them
in a __pycache__ directory. Python 2 and 3 are two major releases of Python,
and 2.x will be obsolete by the year 2020. Python 3 is the preferred version
among the development fraternity, thanks to its advanced features and optimized
functionalities. The latest Python version is 3.7.
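If you want to see this for yourself, the standard library makes the bytecode visible. The sketch below uses only the dis and importlib modules; the function and the module file name are invented for the illustration.
# Peek at CPython bytecode and the cached .pyc path (standard library only).
import dis
import importlib.util

def greet(name):
    return "Hello, " + name

dis.dis(greet)   # prints the bytecode instructions the Python VM will interpret

# Where Python 3 would cache compiled bytecode for a hypothetical example.py:
print(importlib.util.cache_from_source("example.py"))
# e.g. __pycache__/example.cpython-37.pyc (the tag depends on your interpreter)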
The generated bytecode doesn’t run natively on your processor. Instead, it is run by the Python virtual machine, similar to the Java virtual machine or the .NET Common Language Runtime. The initial run of your code results in a compilation step; the bytecode is then interpreted to run on your specific hardware.
Exhibit 6
If the program hasn’t been
changed, each subsequent run will skip the compilation step and use the
previously compiled bytecode to interpret:
Exhibit 7
Interpreting code is going to be
slower than running native code directly on the hardware. So why does Python
work that way? Well, interpreting the code in a virtual machine means that only
the virtual machine needs to be compiled for a specific operating system on a
specific processor. All the Python code it runs will run on any machine that
has Python.
Another feature of this
cross-platform support is that Python’s extensive standard library is written
to work on all operating systems.
Using pathlib (a Python module), for example, will manage path separators for you whether you’re on Windows, Mac, or Linux. The developers of those libraries spent a lot of time making it portable, so you don’t need to worry about it in your Python program!
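A minimal pathlib sketch, assuming nothing beyond the standard library: the same lines build correct paths on Windows, Mac, and Linux without hand-coding separators.
from pathlib import Path

data_dir = Path.home() / "reports" / "2019"    # "/" joins parts with the right separator
data_dir.mkdir(parents=True, exist_ok=True)    # create the folder tree if it is missing

report = data_dir / "summary.txt"
report.write_text("quarterly numbers go here\n")
print(report.suffix, report.exists())          # ".txt" True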
Python’s philosophy is that everything in Python is an object, much like the Linux philosophy that everything is a file. By designing the core data types as objects, one can leverage the attributes of an object for solving problems; every object and every datatype has its own set of attributes.
Python can interact with virtually all databases, including SQL databases such as Sybase, Oracle and MySQL, and NoSQL databases such as MongoDB and CouchDB. In fact, the ‘dictionary’ data structure that Python supports is ideal for interacting with a NoSQL database such as MongoDB, which processes documents as key-value pairs. Web frameworks written in Python, such as Flask and Django, facilitate faster web application building and deployment. Python is also employed to process unstructured data, ‘Big Data’ and business analytics; notable examples are web scraping, sentiment analysis, data science and text mining. It is also used alongside the R language in statistical modeling, given the nice visualization libraries it supports, such as Seaborn, Bokeh, and Pygal. If you’re used to working with Excel, learn how to get the most out of Python’s higher-level data structures to enable super-efficient data manipulation and analysis.
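A tiny illustration of why the dictionary fits document stores so naturally: a dict is already a JSON-style document of the kind MongoDB works with. The record below is invented purely for the example and uses only the standard library.
import json

patient = {                        # an invented document-style record
    "_id": "P-1001",
    "name": "A. Kumar",
    "vitals": {"pulse": 72, "bp": "120/80"},
    "visits": ["2019-08-01", "2019-09-15"],
}
document = json.dumps(patient)     # ready to hand to a document database or a web API
print(json.loads(document)["vitals"]["pulse"])   # 72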
Python is also a glue language, facilitating component integration with many other languages. Integrating Python with C++ or .NET is possible with NumPy acting as the middleman; NumPy, one of the modules on PyPI, acts as a bridge between other languages and Python. PyPI is a growing repository of over two hundred thousand packages, so any developer should check PyPI before venturing out to write their own code. There are also active Python communities available to help with queries.
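For a flavour of the glue role, here is a short sketch using ctypes from the standard library rather than NumPy: it calls a function in the C math library directly from Python. The library name shown is Linux-specific, so adjust it on other platforms.
import ctypes

libm = ctypes.CDLL("libm.so.6")          # the C math library on Linux
libm.cbrt.restype = ctypes.c_double      # declare the C signature: double cbrt(double)
libm.cbrt.argtypes = [ctypes.c_double]

print(libm.cbrt(27.0))                   # 3.0, computed by C code, driven from Python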
Companies of all sizes and in all
areas — from the biggest investment banks to the smallest social/mobile web app
startups — are using Python to run their business and manage their data,
especially because of its OSI-approved open source license and the fact that it
can be used for free. Python is not an option anymore but rather a de facto
standard for programmers & data scientists.
Oil prices were in for a shock on a mid-September Monday morning, when two of Saudi Arabia’s oil production facilities were attacked by multiple drones. Around 5% of the world’s oil supply was hit, and it took news that the US would open its reserves to calm the jump in oil prices.
But the important question is: how powerful are drones? If the news that has come out is accurate, 10 drones succeeded against arguably the single most important piece of infrastructure in the global oil industry, in a country whose defence spending is exceeded only by that of the US and China.
Before we conclude how drones
could change the world, let us take a peek into the journey of drones.
‘Drone’ is the popular name for unmanned aerial vehicles (UAVs). These remotely controlled machines can fly without any human on board.
The first recorded use of a UAV predates the airplane itself, going back as early as 1849, when balloons served as carriers for military purposes. The first truly successful non-crewed, remote-controlled aircraft was the de Havilland DH82B Queen Bee, which entered service in Britain in 1935 and seems to have been the inspiration for calling such aircraft ‘drones’ (after stingless male bees). Over time, drones were powered, used for target practice, reconnaissance and data collection, as decoys and finally in actual combat. Other documented uses include package delivery, agriculture (spraying pesticides and insecticides), environmental monitoring, aerial photography, surveillance, and search and relief operations.
The United Nations Institute for Disarmament Research quoted a report saying that the global military drone market, covering both combat and non-combat drones, is expected to quadruple from its 2015 value by 2022 and surpass a net worth of $22 billion. Meanwhile, Goldman Sachs Research estimates that the overall drone market, including consumer drones, will be around $100 billion by 2020, which suggests the non-military market will be far bigger.
In the agriculture sector, drones are being used to survey the health of crops. Crop health can be assessed with special multispectral cameras: the relative intensity of colour in particular frequency bands is measured to identify undernourished and diseased plants. This can be done without manual inspection of the crops, which is more expensive and time-consuming, and satellite photography may also not be as economical as drones. A GPS-enabled tractor can then treat only the affected areas and prevent chemical runoff.
The construction industry is using drones for “reality capture”. Thousands of photos are captured aerially and then stitched and crunched together to make a 3D model. This is later matched against the digital model to check the deviations between the construction and the design, which helps in taking corrective steps and also prevents errors. Drones are also being used to measure mining stockpiles, where manual measurement has been dangerous.
We can also take a look at how drones will be used by the Indian Government. The Government is trying to map the entire country with precision and wants to use the results for marking boundaries and helping future property buyers. The state government used drones for search and rescue after the 2015 floods in Chennai. Drone photography is already in use in the entertainment industry and at weddings.
Drones could revolutionize the delivery market. Amazon has already announced its ambitious Prime Air, which post-launch will deliver packages of up to 5 lbs within a 10-mile radius. Domino’s has also announced drone delivery plans. We can also see how drones have helped save lives. San Francisco-based Zipline took off in Rwanda in 2016 and has since become a national on-demand medical drone network, used to deliver 150 medical products (mostly blood and vaccines) to places that are difficult to reach. Thanks to the on-time delivery of blood, maternal mortality rates are declining. This shows that in areas with difficult access, drones can help save precious lives.
With further advancements in AI, sensors, and cameras, the uses of drones are only going to increase. But we also need to remember that drones created panic at Newark, Gatwick and Heathrow: flights had to be suspended, causing revenue loss and alarm. The same advancements in technology can be used to limit the damage drones can do. Companies such as Indra, with its anti-drone system ARMS, try to jam a single drone or a swarm. Radars that identify drones pass the alert to ARMS, which uses infrared cameras to confirm the detection and identify the type of drone. Sensors then sweep the radio spectrum to determine what signals the drone is using. This is followed by careful jamming that excludes all other airfield machinery.
Various countries have their own regulations. These generally cover where drones can be flown, ranges, heights, sizes, types, and so on, and some countries also demand registration. As I write this article after the Saudi attack, governments and experts are commenting on how ready they are to face similar attacks. We will see many countries going on a purchasing spree for anti-drone capabilities.
However, we might not see an all-out ban on drones. Once a technology hits the market, there is no stopping it. With all the benefits they provide, drones have complemented many industries in their use cases, and they have truly revolutionized photography, videography, surveying, and logistics. Only time will tell whether these drones, priced at thousands of dollars, can again challenge defence mechanisms worth millions of dollars and cause a ruckus.
“Anyone who stops learning is old,
whether at twenty or eighty. Anyone who keeps learning stays young. The
greatest thing in life is to keep your mind young.”
– Henry Ford
In times of unprecedented disruption and high-velocity change, the path forward is through continuous learning and the intellectual curiosity to remain relevant.
Gone are the days when you learned a profession and practiced it throughout your entire career. There is a compelling need to evolve: to learn, to unlearn, to work, and to repeat.
GAVS strongly advocates investment in its people
and unlocking their potential to innovate and drive growth.
GAVS, in partnership with GLIM (Great Lakes
Institute of Management), one of the Premier Business Schools in India (founded
by a Professor Emeritus at Kellogg School of Management) conducted a “Customer
Centric Leadership Program” for the Business Enablers at GAVS – Talent Acquisition, Talent
Management, Finance and Operations, and Administration Support.
It enabled the teams to discover important leadership traits through engaging and practical sessions, with a clear focus on building confidence and providing toolkits to lead others. Since these are the teams responsible for influencing the morale, productivity and health of the workplace, many real-world experiences and examples prepared the participants to identify their leadership strengths and struggles.
Assertiveness
Prof. Keshav kindled the change agent in each participant. He focused on how collaboration, self-trust, and flexibility can make us outcome-focused and enable our decision-making. The journey of being assertive commenced with role plays and situational case studies on being self-aware, self-regulated and self-motivated, to build effective relationships on the bedrock of empathy.
Influencing for Effective Leadership
Prof. Suresh Varghis emphasized how leaders listen to connect, and provided an influencing framework for personal leadership. He also challenged us to adopt the thinking patterns of a creator, and provided tools and skills to strengthen influencing. The participants learned to use sociograms to map stakeholder relationships based on trust.
One of the exciting exercises was to create a personal vision board. While the participants created personal vision statements, the need to align with the organization’s vision, and to make it real, was brought to the fore. Humanizing GAVS’ vision statement brought out the GAVS DNA in each of the participants. The session closed with an understanding of how to influence with integrity.
Process Excellence
Prof. Keshav, along with Prof. Venu, emphasized that there is a process in every walk of our life and that we subconsciously follow a process in every single activity we do. They then went on to explain why process improvement brings better efficiency and productivity, a better customer and employee experience, and better management of risk to enable profitability.
Teams discussed observing processes at a function level, assessing the impact on stakeholders to mitigate risk, and maximizing profitability, in order to bring in improvements.
Emotional Intelligence
It was a “back to school” day for the participants, as this session was held on campus and they were back in a classroom to understand emotional intelligence at work.
Prof. G.N. Radhakrishnan made us realise how much our interpretations shape the way we react or respond to a situation, while also stressing the importance of not suppressing our emotions. He went on to discuss the ability to manage emotions at the workplace, with a focus on emotional literacy – the facets and process of emotions. After multiple role plays, he discussed with the participants the various tools that help in managing emotions.
Such learning opportunities enable real-time on-demand content for heightened self-awareness.
The topics chosen for the program were pertinent to our area of work and the sessions were structured to first give us an understanding of the topic, evaluate where we stand and finally give us tools to improve. The professors ensured that the group was participative and all of us went back at the end of each module with at least one takeaway for us to work on and improve professionally. The revelations we experienced were tremendous and have surely motivated all of us to look at the way we do our work differently. As a group, it gives us confidence that Together we can achieve greater heights. It is a matter of great pride for all of us to be jointly certified by Great Lakes Institute of Management and GAVS.
“The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast—it is growing at a pace close to exponential.” – Elon Musk
“A
year spent in Artificial Intelligence is enough to make one believe in
God.” – Alan Perlis
It wasn’t so long ago that humanoid robots were merely a fantasy. Not anymore. Now there are advanced humanoid robots like Sophia, Atlas and Talos, which can imitate human gestures, perform a variety of search and rescue tasks, and even operate power tools. Sophia has appeared on talk shows like the Tonight Show with Jimmy Fallon and in music videos, one of which had her as the lead female character. She also has Saudi Arabian citizenship, making her the first robot to have a nationality. This sparked interesting conversations on whether she could vote or marry, or whether a deliberate system shutdown could be considered murder. A video of the Atlas robot was also released on YouTube, in which it was seen performing backflips and practicing parkour.
On August 22, 2019, Russia successfully launched a Soyuz spacecraft with no human crew but with a humanoid robot, Skybot F-850, originally known as FEDOR (Final Experimental Demonstration Object Research). However, it was not the first robot to go into space: in 2011, NASA sent up Robonaut 2, which returned to Earth in 2018.
Artificial intelligence
has become so commonplace that we don’t even realise that we are using it every
day. Be it web searches on Google, product suggestions on Amazon, music
recommendations on Spotify or just booking a cab on Uber. We all have had
enough discussions on how artificial intelligence can change everything around
us. But how far have we reached in this journey?
On July 31, 2019, The Verge published an article about an initiative called ‘Sentient’, which was presented at the 35th Annual Space Symposium in Colorado Springs, CO. A product of the National Reconnaissance Office (NRO), Sentient is a fully integrated intelligence system that can coordinate satellite positions and may soon be used to manage battlefield operations during military engagements. In simpler words, American intelligence agencies have been developing a top-secret ‘artificial brain’ military AI system. Research related to Sentient has been ongoing since 2010. Until now, it had been treated as a government secret, except for a few indirect references in speeches and presentations. Government officials are still tight-lipped about what the AI system can do and how it will be used in future conflicts.
So, what really are artificial brains? Artificial brains are man-made software and hardware that are just as intelligent, creative, and self-aware as humans; such machines would function much like an animal or human brain. Beyond that, there has also been research on Whole Brain Emulation (WBE). Also known as ‘mind uploading’ or ‘mind copying’, WBE is a futuristic process of scanning the mental state of a particular brain and copying it to a computer. This computer could then process information and respond in essentially the same way as the original brain would, so the knowledge and intelligence of anyone could be preserved and used even after that person’s death. Artificial brains and WBE have been a theme in many works of science fiction, including movies like Star Trek, Transcendence and Captain America, and television shows like Warehouse 13, The Simpsons and Black Mirror.
In May 2005, the Brain and Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland founded a project called the Blue Brain Project. This project aims to create a digital reconstruction of rodent and eventually human brains by reverse-engineering mammalian brain circuitry. The project is headed by its founding director Henry Markram, who also launched the European Human Brain Project (HBP), a ten-year scientific research project that aims to advance knowledge in the fields of neuroscience, computing, and brain-related medicine. “It is not impossible to build a human brain and we can do it,” he said at a TED Global conference in Oxford, “…and if we do succeed, we will send a hologram to TED to talk.” In 2018, the Blue Brain Project released its first digital 3D brain cell atlas of every cell in the mouse brain, which provides information about major cell types, numbers, and positions in 737 regions of the brain. This could massively accelerate progress in brain science. While Blue Brain is able to represent complex neural connections on a large scale, the project has not yet achieved the link between brain activity and the behaviours executed by the brain.
Joshua Blue
is another project under development by IBM that focuses on designing and
programming computers that can think like humans. It is said to acquire
knowledge through external stimuli present in its environment, similar to how
children learn human traits through interacting with their surroundings. IBM
has not yet released any significant information regarding how Joshua Blue will
physically gather information, but they have revealed that it will be a
computer with a network of wires and input nodes that function as a computer
nervous system. This nervous system will allow the machine to interpret the
significance of events. Other than Joshua Blue, IBM is attempting to imitate
the common functions of a human brain through two of their other projects —
Deep Blue, a logic-based chess playing computer, and Watson, a
question-driven artificial intelligence software program.
There is an
ongoing attempt by neuroscientists to understand how a human brain works with
the goal of having something known as Strong AI. Given current trends in
neuroscience, computing, and nanotechnology, it is likely that artificial
general intelligence will emerge soon, possibly by the 2030s.
Strong AI has been a controversial topic, as some believe it can be inherently dangerous. It could even lead to a form of human cloning, and it is hard to estimate how big a threat that might be to nature. A major problem is that an unfriendly artificial intelligence system is likely to be much easier to produce than a friendly one. Some public figures, such as theoretical physicist Stephen Hawking, Microsoft founder Bill Gates, and Tesla and SpaceX CEO Elon Musk, have advocated research into precautionary measures to ensure that future superintelligent machines remain under human control and that an ‘AI takeover’ remains a hypothetical situation.
On the other hand, there are major proponents of AI, like Facebook founder Mark Zuckerberg. There are many potential use cases for an artificial brain, one of which is driverless cars. The world’s largest car makers are investing in technologies that could replace a human driver. But no matter how safe driverless cars become, the AI driver will eventually face the moral dilemma of having to decide whether to prioritize the safety of its passengers or of others who might be involved in a collision. In such cases, an artificial brain may have more capacity to deal with these kinds of ethical conundrums.
Whatever the future holds for AI, it is sure to take our lives to the next level.
Senior Director Software Engineering, Premier Inc.
A couple of months ago my husband and I signed up for a few tango lessons at a local studio. Well, one thing is for sure: we didn’t become the world’s best dancers. But through the dance, I learned a beautiful lesson in leadership. Or rather, in the opposite of leadership – the art and joy of being a follower.
Continue reading to learn about simple
principles that apply everywhere in life and at work, that will help you become
a better follower and also a better leader.
Social tango is danced in clubs and studios
all around the world. It’s a dance of close embrace, tender connection, and
quiet understanding. If you’re picturing a passionate artistic performance in
which the partners look away from each other, that’s a different tango.
Traditionally tango is danced by a man and
a woman. Today the distinction is not very important, but what still matters is
the fact that two people dance two different dances, one by a leader, and
another by a follower. And yet, they do it together in a perfectly coordinated
unison.
As I was learning the lady’s steps, several things struck me as fascinating. First, the follower almost always moves backward. She cannot see where she goes, so she can basically close her eyes and be mesmerized by the music (which a lot of people do). She fully trusts her partner (the leader) to create a path on a crowded dance floor, away from other couples, walls, and furniture. For some reason, our path inevitably led into a corner full of stacked chairs.
Second, even though the dancers hold each
other very closely, they don’t lean on each other, each person fully stable and
perfectly balanced without their partner’s support.
Third, they communicate without words.
There’s no choreography. The entire dance is an improvisation, created by the
leader on the fly, as he’s listening to the music and navigating his partner
across the floor. The follower has no idea what dance she will be dancing. And
yet, the understanding between two good dancers is amazing. They communicate by
slight shifts of their weight, sending clear messages and making sure the
partner understands their intentions.
And the most unexpected discovery about
being a follower was that she is, in fact, the most important person in this
dance.
You know how we always want to be leaders? We start teaching kids about leadership as soon as they enter preschool, as if it’s the most important role in the world. Nobody teaches us to be followers. Or to appreciate followers. Or to even see them for what they truly are.
But watch this video (and by the way, this
is what social tango looks like, not that clumsy walking my spouse and I were
doing). https://youtu.be/7-i1glazW_I
Gender roles aside, in this 3-minute dance, the
leader is a dark blob, carefully moving around the dance floor. He watches the
crowd and figures out a safe path. He listens to the music beat and initiates
the steps. He shapes an outline of the dance by suggesting the direction to his
partner, the follower.
Because it is really her dance. It is her grace, musicality, and beautiful footwork that we can’t take our eyes off. All we see is her lovely face, graceful arms, strong back, beautiful legs, elegant shoes. She creates the dance, she lives in the music, while he builds a safe space for her to be her best. He is modest, she is a show-off. She respects his vision, and together they co-create, keep each other balanced and grounded, and allow each other to fully express themselves.
We always want to be leaders, but let’s not forget that the real fun is in being a follower. Only as followers do we actually create something meaningful. As leaders, we only suggest the direction and make sure the route is clear and safe. Then we stand a little to the side and let everybody see what a beautiful dance our followers have created.
The truth is, in life and at work, we are always both leaders and followers. To be a good leader, we have to understand what it means to be a great follower. But most of all, we should take a moment and appreciate the joy and fun of being a follower. The freedom to create and generate ideas. An abundance of opportunities to turn ideas into reality. A chance to get into the zone, focus, and work. The sense of accomplishment in seeing things getting done.
Never in the history of mankind has the velocity of business increased at such a rapid pace. This is largely driven by the rise of digital business models, enabled by exponential growth in technology. There is a plethora of information suggesting that we are in the Fourth Industrial Revolution — and soon to be entering the fifth.
What if we are actually in the midst of
moving out of the industrial revolution and entering the next stage in our
development? We just don’t know what to call it yet! I’m sure when mankind
started moving from the agricultural age to the industrial age, they didn’t
know what to call it either.
There are three major drivers that point to
a move to the next age: the exponential growth in technology, a gap in human
adoption of that technology and dramatic changes to how companies operate in
order to transform from traditional business to digital business. In order to
make this journey, it’s critical to understand the difference between
traditional and digital business.
The Digital Flow Framework
As someone who co-founded a company to work
with enterprises to develop digital strategies and roadmaps, I created the digital
flow framework to bridge the gap in understanding between traditional and
digital business, with the goal of accelerating the journey to digital. This
framework is based on a metaphor of speed. So, allow me to go off on a slight
tangent and describe the three levels of speed:
Speed 1: The fastest person to
climb Mount Everest was ultra-runner Kilian Jornet, who went roughly 0.5 mph to
accomplish his goal.
Speed 2: A jet stream is
significantly faster, reaching highs of around 250 mph.
Speed 3: Solar winds move up to
1 million mph.
Traditional business is anchored firmly on
the ground with changes in business models being reliant on Traditional
technology that takes quarters and years to implement.
Digital business today is sitting in a jet stream,
with an ability to test and change business models in short periods of time and
implement business changes in weeks and even days when digital technology is
fully adopted. All the technology in the jet stream area of the diagram is
mature enough to implement today.
And up in the future of business are a number of exponential technologies that are in the development phase, being worked on by scientists, engineers, and makers. These technologies have the potential to change every aspect of the world as we know it, and they will make our current way of doing business look relatively slow.
Digital Flow Framework Business Outcomes And Recommended Actions
Realizing what can be achieved at each
level of the framework is a catalyst to drive action. Future business will be
characterized by real-time business models, fully automated processes and rapid
product releases resulting in frictionless business. The action to take now for
most companies is to watch this space. Companies that plan on being a provider
of the technologies at this level should be actively researching and developing
their solutions.
Digital business is characterized by a near-frictionless business when technology and new ways of working are fully adopted. Full adoption enables rapid changes to business models and process velocity with a very high speed to value creation. The action for all companies to take is to invest heavily in digital transformation.
At the speed of traditional business,
business models are relatively static with a lot of self-induced complexity and
constrained processes leading to a slow time to value for new capabilities. The
action to take here is to freeze investments, divest in old ways of working and
technology and use the freed-up investment to invest in digital.
Three Components To Master The Digital Flow Framework And Achieve Digital Velocity
There are three components to achieving digital velocity and flow: talent, operations, and technology. Starting with talent, it’s important to move from a scarcity mentality about a war on talent to an abundance mindset. Applying talent capacity planning using digital process accelerators is key.
In terms of employees, the time is now to utilize artificial intelligence (AI) to augment the talent pool, eliminating redundant and data-heavy workloads and continuously speeding up work. The other part of talent capacity planning is a workforce that expands and contracts through alternate talent pools such as crowdsourcing, freelancing, and the gig economy, the use of joint ventures with customers and suppliers, and partnering with universities, customers and other companies for IP generation.
The second component to master in order to achieve digital velocity is to deal with the significant changes required in operations. In digital business, there is a “customer-in” approach with extreme customer-centricity. This means being committed to solving customers’ issues, providing a superb experience and being willing to disrupt the company’s own business model. From an internal organization standpoint, teams reorganize away from functional departments and into cross-functional teams working directly with customer microsegments. These teams have agile mindsets and methods and often use variations of the internal processes to cater directly to their customers. This is a very different approach from traditional business, where there is a “company-out” approach in which work is accomplished in a waterfall fashion through siloed departments, with customer touchpoints limited to sales at the beginning and product and service delivery at the end.
The last component to master in order to
achieve digital velocity and flow is digital technology. At a minimum, it’s
important for everyone in the company to be aware that digital technology is
significantly different than traditional technology. Companies can’t achieve
digital velocity with traditional technology.
Stay tuned for part two of this series to learn more.
Conclusion
The move from traditional business to digital business is not about the destination — it’s about the journey. Since the end state keeps moving, becoming excellent in the journey is key. To achieve digital velocity and flow, it’s important to master: digital talent practices, digital operations, and digital technology.
New technologies and new conversations – every piece of them is crucial, as it generates big data. And big data is critically responsible for creating innovative value, in turn giving rise to megatrends. Big data has transformed the overall perspective of technology, and its impact on mankind is massive. To propel the digital revolution with big data, enterprises needed skilled data scientists who could do justice to the use of big data in business. The sudden upsurge in the requirement for data scientists took the global market by storm, showing signs of disruption. This disruption compelled organizations to rely on data-driven insights, and imparting training to the relevant employees became crucial to keep things running seamlessly.
Data and Digital Revolution: A correlation
The world has witnessed technological diversity, and the volume of data generated along the way has grown clearly out of proportion, compelling companies to act proactively. The overflow of data is the result of an explosion of customers, a rise in business partners and unfathomable internal records. Secondary research suggests that by 2020, the pool of unstructured data will increase further due to a steep rise in smart connected devices. This has forced companies to dedicate a specific budget and a team of skilled professionals to deal with such data and adopt proactive analytical measures to ensure the organization’s success. Embracing the digital revolution was the only way to survive the cut-throat competition.
Who will help? Data scientists, of course!
Data science is the new order of the day, and so data scientists are very much in vogue. Research data suggests that approximately 50,000 data science and machine learning jobs have emerged in the market in the recent past. The growing need for companies to develop intuitive platform solutions, self-service data visualizations, and analytical and reporting tools has led to a mounting dependence on data scientists, who help bridge the gap between programming and implementation.
The Department of Statistics in the US points out that data scientists ought to be proficient in mathematics, statistics, and programming, and within another year their demand is expected to soar by 40% over current requirements. Recently, the demand for data scientists in India has also exploded beyond manageable limits. Compared to the market demand, the number of data science professionals grew by only a meagre 19%, indicating the huge potential in the market.
The job hype: Key drivers
External data sources reveal that the demand for data scientists will increase by 28% by 2020. IT, professional services, insurance, and finance are among the domains that will witness a peak in demand for data scientists. 2019 has already seen a 59% rise in demand for data science and analytics (DSA) jobs. The key factors that contributed to this rising demand include:
Currently, the most lucrative analytical skills in the market are MapReduce, Big Data, Apache Pig, Machine Learning, Hive, and Hadoop. This is rewarding for practitioners, as data scientists trained in these skills are usually among the highest paid in the industry. Market data on the average salary of individuals with data science skills is cited below:
Skill sets are evolving, and cutting-edge skills are, in reality, giving enterprises a new direction. This disruption is a challenge for enterprises, owing to the exorbitant cost of resources and impending unknown risks. Despite these challenges, enterprises are still recruiting, training and integrating data scientists into their existing business processes. Since data science is being widely adopted by companies across all industry types, the demand for data science professionals has been massive. However, as the existing number of data scientists is quite limited compared to the whopping demand, the value of this profile has risen beyond imagination. It has further been observed that data scientists earn approximately $118,000 per year.
Coping with the rising demand
Secondary market research has found that 43% of organizations across the world lack the appropriate skills to handle data science, while only 5% of organizations have so far been able to change their legacy approach to attract data scientists. The growing scarcity of data scientists has compelled enterprises to train their existing resources to meet these expectations. Formal in-house training, along with the integration of new talent and traditional data workers, has worked well for a few organizations. Data sources confirm that approximately 63% of organizations are adopting this approach; the advantage of training existing employees is that they are already aware of the business and its mode of operation. However, amid all this training, organizations are yet to fully translate analytical insights into business actions.
Successful implementation of data science
Adopting data science without a well-designed support system and guidance can yield negative results. Thus, it is essential to train traditional data workers so that the new skills can be merged in to derive something positive. A few organizations have also created a set of new roles and responsibilities to support this merging, which has resulted in alterations to the organizational structure to meet the prerequisites of infusing data science skills into the system. An analytics translator, possessing both data science and analytical skills, is best positioned to give an enterprise the maximum advantage. This will enhance the scope of quantitative as well as qualitative analysis.
The success of data scientists is more a function of teamwork than of individual contribution alone; hence, it demands a perfect blend of varied skills, expertise, and experience. People from different disciplines and educational backgrounds can be part of the data science revolution. According to a report by LinkedIn, based on an extensive study of member profiles, job openings, and salaries, there has been a 56% increase in job vacancies for data scientists across the US. Glassdoor has backed the claim, confirming that in the last three years the demand for data scientists has gone up considerably.
Driving towards a better future
The potential has already been unleashed. Data-driven business decisions have taken center stage. As a result, enterprises are investing in the latest technologies to stay ahead of the competition and predict the future better. Clearly, data scientists are now considered a new class of innovators with excellent business acumen and technical expertise. Data science technology will evolve with every passing year, opening up more scope for rigorous investment in data science. Everyone now wants to shape the future and be the force behind the change.