Cyber Security for Healthcare

The technology boom in healthcare is gradually freeing doctors and hospitals from the need for physical proximity in care delivery. This has been accelerated by the big move towards remote care and telehealth due to the pandemic. The healthcare industry has also benefited significantly from connected devices such as remote patient monitoring devices, sensors, wearables, and records management software. According to a 2020 Forrester report, connected devices make up 74% of all devices in the healthcare industry. However, this modernization of healthcare comes with its pitfalls. The threat landscape is growing exponentially as healthcare organizations move beyond defined physical boundaries. With the surge in connected devices, medical device security is now a global concern. For example, the 2017 WannaCry ransomware attack had a widespread impact on National Health Service hospitals in England and Scotland, with as many as 70,000 devices affected.

Recent years have witnessed a significant increase in the number of cyberattacks. The healthcare industry is reported to be two to three times more prone to cyberattacks than other industries. The US Department of Homeland Security, the FBI, Interpol, and the United Kingdom's National Cyber Security Centre have issued several advisories to healthcare organizations on the rise in cyberattacks and ransomware.

Cyberattacks in Healthcare

Cybersecurity in healthcare includes protecting data and electronic assets from unauthorized access, use, or disclosure. Some of the common cyberattack routes that can lead to credential misuse and data or resource hijacking include:

  • Ransomware, through which attackers use malware to take control of individual devices or servers and release them only in exchange for money or other demands
  • Malicious websites that can collect data or hack the device
  • Phishing attacks through email
  • Blind spots in encryption that go undetected during inspections
  • Weak passwords and unencrypted devices

Despite the surge in cyberattacks, most healthcare organizations typically allocate only a minuscule portion of their total IT budget to cybersecurity. These attacks affect the delivery of patient care across healthcare facilities. Beyond the fact that sensitive private data gets compromised and can be misused, these incidents can harm patients, as tampering with records can result in wrongful diagnoses or delays in treatment.

Cyber Security Strategy

The three goals of cybersecurity, also known as the ‘CIA triad,’ focus on protecting the confidentiality, integrity, and availability of information. Market research leaders such as Gartner & Forrester recommend that organizations within the government and the private sector take a collaborative, layered approach to protect patients and their data from cyber threats. To that end, the various aspects that healthcare industry players must focus on while preparing a cybersecurity strategy are:

  • Architecture analysis
  • Effectiveness of analytics and reporting
  • Preparation for attack
  • Threat research
  • Device visibility
  • Vulnerability management
  • Integrations
  • Vision
  • Roadmap
  • Market approach

To plan an effective cybersecurity strategy, companies must involve different teams including the CISO, CIO, infrastructure & application leaders & teams, security & risk management teams, etc. The steps typically need to include:

  • Alignment of the strategy with organizational security & business goals
  • Development of an action plan based on vulnerability assessment, board buy-in/resource backing, and a policy framework
  • Execution leveraging the right tools, technologies, and skillsets
  • Program maturation through critical incident response, advanced analytics, and employee training/enablement
  • Continuous reassessment & realignment through metrics & feedback, and required optimizations

Cyber Security Planning

Although the above form the base to start a cybersecurity strategy, implementing recommended safety practices depends on the organization’s size, complexity, and type. The key factors can be categorized as health information exchange partners, required IT capabilities, cybersecurity investment required, healthcare service provider size, and service complexity. Once these factors are established, a cybersecurity system with at least the following components must be implemented:

  • Firewall – Build a robust firewall to protect the system from outside threats
  • Access categorization – Regulate admission or access to suspicious and infected websites to protect the system
  • Intrusion Detection System (IDS) – Use IDS to analyze inbound and outbound traffic based on traffic logs (see the sketch after this list)
  • Intrusion Prevention System (IPS) – Complement IDS with an IPS to block or control traffic identified as malicious
  • Policy management – Develop a set of rules that helps strengthen the firewall security of the system.
  • Virus scanning capabilities – Implement antivirus systems such as Avast, McAfee, Norton to help improve protection against malware, spam, and phishing.
  • Security Information Event Management (SIEM) – SIEM helps manage and record attacks on the network.
  • Patching – Regular patches for computers and programs must be applied without delay to avoid system compromise
  • Continuous end-user education – To build a network defense, users must have a fair understanding of the different types of threats. All users should be familiar with trusted networks, password strength, and even email etiquette.
  • System updates – To reduce the risk of hacking and viruses, update software to the latest version. Keeping software up-to-date will mitigate malware attacks.
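
As a simple illustration of the IDS item above, here is a minimal sketch in Python of the kind of traffic-log analysis an intrusion detection system performs. The log format, addresses, and threshold are hypothetical assumptions for illustration, not the behavior of any specific product.

    # Hypothetical, simplified connection log entries: (source_ip, destination_port, action)
    traffic_log = [
        ("10.0.0.5", 443, "ALLOW"),
        ("203.0.113.9", 22, "DENY"),
        ("203.0.113.9", 23, "DENY"),
        ("203.0.113.9", 3389, "DENY"),
        ("10.0.0.7", 80, "ALLOW"),
    ]

    PORT_SCAN_THRESHOLD = 3  # flag any source probing 3 or more distinct ports

    # Count the distinct ports probed by each source address
    ports_by_source = {}
    for src, port, action in traffic_log:
        ports_by_source.setdefault(src, set()).add(port)

    # Raise an alert for sources that look like port scanners
    for src, ports in ports_by_source.items():
        if len(ports) >= PORT_SCAN_THRESHOLD:
            print(f"ALERT: possible port scan from {src} across ports {sorted(ports)}")

A real IDS works on live packet captures or firewall logs and applies both signature and behavioral rules; the point here is only the shape of the analysis.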

GAVS for Cyber Security

Leveraging alliances with global technology leaders in Cyber Defense, Endpoint Security, IAM, and others, GAVS delivers superior AI-led cybersecurity services to proactively manage risk. Across assessment, operations, and strategy, GAVS offers various services including:

  • Assessment and advisory services
  • Security operations
  • Digital identity services
  • Security project implementation
  • DevSecOps and cloud security

As the dependency on technology increases, robust cybersecurity is imperative to conduct day-to-day operations, protect data, and improve patient safety. The healthcare industry must prioritize cybersecurity initiatives from fiscal, technical, and operational standpoints by upgrading or replacing legacy systems, implementing cybersecurity awareness and training programs, conducting continuous end-to-end security risk assessments, increasing budgets, and most of all, considering cybersecurity an integral part of organizational strategy and not as a stand-alone initiative.

To learn more about GAVS cybersecurity offerings, please visit https://www.gavstech.com/service/security-services/.

Reimagining ITSM Metrics

Rama Vani Periasamy

In an IT organization, what is measured as success? Predominantly, it leans towards Key Performance Indicators, internally focused metrics, SLAs, and other numbers. Why don't we shift our performance reporting towards the 'value' delivered to our customers, alongside the contractually agreed service levels? The success of any IT operation comes from defining what it can do to deliver value, and publishing the value that has been delivered is the best way to celebrate that success.

It has been a concern that people in service management overlook value as trivial and often do not report any real information about the work they do. In other words, the value they have created goes unreported and the focus lies only on SLA-driven metrics and contractual obligations. It could be because they are more comfortable with the conventional way of demonstrating the SLA targets achieved. And this eventually prevents a business partner from playing a more strategic role.

"Watermelon reporting" is a phrase used to describe a service provider's performance reporting. The SLA reports show that the service provider has adhered to the agreed service levels and met all contractual service level targets; it looks 'green' on the outside, just like a watermelon. However, the level of service perceived by the service consumer does not reflect the 'green' status reported (it might actually be 'red', like the inside of a watermelon). And the service provider continues to report on metrics that do not address the pain points.

This misses the whole point about understanding what success really means to a consumer. We tend to overlook valuable data: the data that shows how the organization, as a service provider, is delivering value and helping the customer achieve their business goals.

The challenge here is that often consumers have underdeveloped, ambiguous and conflicting ideas about what they want and need. It is therefore imperative to discover the users’ unarticulated needs and translate them into requirements.

For a service provider, a meaningful way of reporting success would focus on outcomes rather than outputs, which is very much in line with ITIL 4. This creates a demand for better reporting and analysis of delivery, performance, customer success, and value created.

Consider a healthcare provider: the reduced time spent retrieving a patient's history during surgery can be a key business metric, while the number of incidents created or the number of successful changes may be secondary. As a service provider, understanding how their services support such business metrics adds meaning to the service delivered and enables value co-creation.

It is vital that a strong communication avenue is established between the customer and the service provider teams to understand the context of the customer's business. To a large extent, this helps the service provider teams prioritize what they do based on what is critical to the success of the customer or service consumer. More importantly, this enables the provider to become a true partner to their customers.

Take the service desk as an example: service desk engineers fix printers and laptops and reset passwords. These activities may not provide business value directly, but they mitigate loss or disruption to a service consumer's business activities. The other principal part of service desk activity is responding to service requests. This is very much an area where the business value delivered to customers can be measured using ITSM.

Easier said than done; but how, and what business value, should be reported? Here are some examples to get started.

1. Productivity
Assume that every time a laptop problem is fixed within the SLA, it allows the customer to get back to work and be productive. Value can be measured here as the cost reduction, considering the employee cost per hour and the time spent by the IT team to fix the laptop.
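
To make that concrete, here is a minimal sketch in Python of how the calculation might look; every figure below is a hypothetical placeholder to be replaced with your own employee cost and fix times.

    # Hypothetical figures for a single laptop-fix ticket
    employee_cost_per_hour = 40.0   # fully loaded cost of the affected employee
    it_cost_per_hour = 35.0         # cost of the service desk engineer
    hours_lost_before_fix = 1.5     # employee downtime while the laptop was broken
    hours_spent_fixing = 0.5        # engineer effort to resolve the ticket

    # Business value of the fix = productivity recovered minus the cost of delivering the fix
    productivity_recovered = employee_cost_per_hour * hours_lost_before_fix
    cost_of_fix = it_cost_per_hour * hours_spent_fixing
    net_value = productivity_recovered - cost_of_fix

    print(f"Value delivered by this single ticket: ${net_value:.2f}")  # $42.50

Aggregated across all tickets in a reporting period, this is the kind of number that speaks to the business far more directly than an SLA percentage.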

How long does it take for the service provider to provide what a new employee needs to be productive? This measure of how long it takes to get people set up with the required resources and whether this lead-time matches the level of agility the business requires equates to business value. 

2. Continual Service Improvement (CSI)

Measuring value becomes meaningless without CSI. So, measuring the cost of fixing an incident plus the associated loss of productivity, and then identifying and providing solutions to reduce those costs or avoid such incidents altogether, is where CSI comes into play.

Here are some key takeaways:

  • Make reporting meaningful by demonstrating the value delivered and co-created, uplifting your operations to a more strategic level.
  • Speak to your customers to capture their requirements in terms of value and enable value co-creation as partners.
  • Your report may wind up in the trash, not because you have reported the wrong metrics, but because it reports data that is of little importance to your audience.

Reporting value may seem challenging, and it really is. But that's not the real problem. Keep reporting your SLAs and metrics, but add more insights to them. Keep an eye on your outcomes and prevent your IT service operations from turning into a watermelon!


About the Author –

Rama is a part of the Quality Assurance group, passionate about ITSM. She loves reading and traveling.
To break the monotony of life and to share her interest in books and travel, she blogs and curates at www.kindleandkompass.com

The Hands that Rock the Cradle, also Crack the Code

Sumit Ganguli

On February 18, 2021, I was attending a video conference, with my laptop perched on my standing desk, while I furtively stole glances at the TV in my study. I was excitedly keeping up with the Perseverance rover that was about to land on Mars. I was mesmerized by the space odyssey and was nervous about the 'seven minutes of terror' – when the engineers overseeing the landing would not be able to guide or direct Perseverance, as it takes too long to send any communication from Earth to Mars. Hence, the rover would have to perform the landing by itself, with no human guidance involved.

During this time, I thought I saw a masked lady with a ‘bindi’ on her forehead at the NASA control room who was, in her well-modulated American accented voice, giving us a live update of the Rover.

And since that day, Swati Mohan has been all over the news. We have got to know that Mohan was born in Bengaluru, Karnataka, India, and emigrated to the United States when she was one year old. She became interested in space upon seeing Star Trek at age 9. She studied Mechanical and Aerospace Engineering at Cornell University, and did her master’s degree and Ph.D. in Aeronautics and Astronautics at Massachusetts Institute of Technology.

Swati Mohan is the Guidance, Navigation, and Controls (GN&C) Operations Lead for the Mars 2020 mission. She led the attitude control system of Mars 2020 during operations and was the lead systems engineer throughout development. She played a pivotal part in the landing, which was rather tricky.

This led me to ruminate about women and how they have challenged stereotypes and status quo to blaze the trail, especially in STEM.

I have been fascinated from the time I got to know that the first programmer in the world was a woman, and a daughter of the famed poet Lord Byron, no less. The first programmer in the world, Augusta Ada King-Noel, Countess of Lovelace (née Byron), was born in 1815 and was the only legitimate child of the poet Lord Byron and his wife Annabella.

As a teenager, Ada's prodigious mathematical talents led her to British mathematician Charles Babbage, who became her mentor. Babbage is known as 'the father of computers'. Ada translated an article on the Analytical Engine, which she supplemented with an elaborate set of notes, simply called Notes. These notes contain what many consider to be the first computer program—that is, an algorithm designed to be carried out by a machine. As a result, she is often regarded as the first computer programmer.

Six women—Frances "Betty" Snyder Holberton, Betty "Jean" Jennings Bartik, Kathleen McNulty Mauchly Antonelli, Marlyn Wescoff Meltzer, Ruth Lichterman Teitelbaum, and Frances Bilas Spence—were associated with the programming of the first computer, ENIAC. They had no documentation and no schematics to work with. There was no language and no operating system; the women had to figure out what the computer was, and then break down a complicated mathematical problem into very small steps that the ENIAC could perform. They physically hand-wired the machine, using switches, cables, and digit trays to route data and program pulses. This was a very complicated and arduous task. So, these six women were the programmers of the world's first general-purpose electronic computer.

The story goes that on February 14, 1946, ENIAC was announced as a modern marvel in the US. There was praise and publicity for the Moore School of Electrical Engineering at the University of Pennsylvania, and the inventors of ENIAC, Eckert and Mauchly, were heralded as geniuses. However, none of the key programmers, all of them women, were introduced at the event. Some of the women appeared in photographs later, but everyone assumed they were just models, perfunctorily placed to embellish the photograph.

One of the six programmers, Betty Holberton, went on to invent the first sort routine and help design the UNIVAC and the BINAC alongside Jean Jennings. These were among the first commercial computers in the world.

It behooves us to walk down the pages of history and read about women who, in their time, decided to #ChooseToChallenge, and to celebrate the likes of Swati Mohan, who stand tall on the shoulders of the first women programmers.

About the Author –

Sumit brings over 20 years of rich experience in the international IT and BPO sectors. Prior to GAVS, he served as a member of the Governing Council at a publicly-traded (NASDAQ) IT and BPO company for over six years, where he led strategic consulting, IP and M&A operations.

He has managed global sales and handled several strategic accounts for the company. He has an Advanced Professional Certificate (APC) in Finance from Stern School of Management, NYU, and is a Post Graduate in Management from IIM. He has attended the Owners President Management Program (OPM 52) at HBS and is pursuing a Doctorate in Business Administration at the LeBow School of Business, Drexel University.

He has served as an Adjunct Professor at Rutgers State University, New Jersey teaching International Business. He speaks at various industry forums and is involved in philanthropic initiatives like Artha Forum.

Privacy Laws – Friends not Foes!

Barath Avinash

“Privacy means people know what they’re signing up for, in plain language, and repeatedly. I believe people are smart. Some people want to share more than other people do. Ask them.” – Steve Jobs


However futile a piece of data may seem today, it might be of high importance tomorrow. Misuse of personal data can lead to devastating consequences for the data owner, and possibly for the data controller.

Why is Data Privacy important?

For us to understand the importance of data privacy, the consequences of not implementing privacy protection must be understood. A very relevant example to understand this better is the Facebook-Cambridge Analytica scandal which potentially led to canvassing millions of Facebook users for an election without users’ explicit consent. 

One long-standing argument against privacy is, "I do not have anything to hide, so I do not care about privacy." It is true that privacy can provide secrecy, but beyond that, privacy also provides autonomy and therefore freedom, which is more important than secrecy.

How can businesses benefit by being data privacy compliant?

Businesses can reap multifold benefits from complying with, implementing, and enforcing privacy practices within the organization. Once an organization is compliant with general data privacy principles, it also becomes largely compliant with healthcare data protection laws, security regulations, and standards. This reduces the effort the organization has to put in to comply with several other security and privacy regulations or standards.

How can businesses use privacy to leverage competition?

Privacy has become one of the most sought-after domains following the enactment of the GDPR in the EU, followed by the CCPA in the USA and several other data protection laws around the world. Businesses can leverage these for competitive advantage rather than viewing privacy regulations as a hurdle or merely a mandatory compliance requirement. This can be achieved by being proactive and actively working to implement and enforce privacy practices within the organization. Establish compliance with customers by asking for consent, being transparent about the data in use, and providing awareness. Educating people through user-centric awareness, as opposed to awareness for the sake of compliance, is a good practice and will enhance the reputation of the business.

Why is privacy by design crucial?

Businesses should also focus on embedding the 'privacy by design' principle into their operations, so that products are built to be compliant with privacy as well as security regulations and standards, and a solidly built, future-proof product can be delivered.

The work doesn’t stop with enforcement and implementation, continual practice is necessary to maintain consistency and establish ongoing trust with customers.

With increasing statutory privacy regulations and laws in developed countries, several other countries have been either planning to enact privacy laws or have already started implementing them. This would be the right time for businesses located in developing countries to start looking into privacy practice so that it would be effortless when a privacy law is enacted and put into enforcement.

What’s wrong with Privacy Laws?

Privacy laws that are in practice come with their fair share of problems since they are relatively new.

  • Consent fatigue is a major issue with GDPR since it requires data owners to consent to processing or use of their data constantly, which tires the data owner and results in them ignoring privacy and consent notices when sent by the data processor or data collector.
  • Another common issue is ill-motivated users or automated bots bombarding the data collector with multiple requests for a data owner's data held by the controller. This is a loophole under the GDPR 'right of access' that is being exploited in some cases. It burdens the data protection officer and causes delays in sending the requested data to the customer, thus inviting legal consequences.
  • Misuse of purpose limitation guidelines is also a major problem in the GDPR space. Time and again, data collectors provide a data processing purpose notice to data owners and subsequently use the same data for a different purpose without obtaining proper consent from the data owner, thus violating the law.

What does the future hold for privacy?

As new privacy laws are in the works, better and more comprehensive laws will be brought in, learning from the shortcomings of existing laws. Amendments to existing laws will also follow, to enhance the privacy culture.

The privacy landscape is moving towards better and more responsible use of user data. As the concept of privacy and its implementation mature with time, it is high time businesses started implementing privacy strategies primarily for business growth rather than merely for regulatory compliance. That is the goal every mature organization should aim for and work towards.

Privacy is, first and foremost, a human right; privacy laws are therefore enacted on the basis of rights, because laws can be challenged and modified in a court of justice, but rights cannot be.

References:

https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.htm

https://iapp.org/news/a/fake-dsars-theyre-a-thing/

About the Author –

Barath Avinash is part of GAVS' security practice risk management team. He has a master's degree in cyber forensics and information security. He is an information security and privacy enthusiast, and his skill set includes governance, compliance, and cyber risk management.

5 Leadership Lessons from the Pandemic to Kickstart your Technology Career in 2021

Jane Aboyoun, CIO, SCO Family of Services

Life is not without its ironies. While the pandemic turbo-charged our dependence on technology for day-to-day activities like never before, it also clarified the importance, as a leader, of being thoughtful and strategic – of taking a step back before leaping into the fray. Here are 5 lessons that helped me navigate the COVID crisis, which I believe we can all benefit from carrying forward into 2021 and beyond.

  1.  Slow Down to Speed Up

The necessity of responding effectively to COVID-19 as a Tech Chief compelled me to use my expertise to quickly identify technology solutions that would have an impact for my clients.  While responsiveness in an uncertain climate is essential, it’s actually a strong technology foundation that allows agility and creates ballast for organizations looking to gain competitive advantage in uncertain times.  

Lesson #1 is therefore that while it may not be as inspiring as the latest app, focusing on the “blocking and tackling” and building a strong technology foundation enables agility and re-invention.  As a CIO, I constantly balance possible change opportunities with the readiness of my clients to accept that change. Knowing how far to push my clients is a key part of my role. Just because a technology is available, doesn’t always mean it’s right for them.  Always consider how a new technology fits within the foundation.

  2. Don't Reinvent the Wheel

My role as the CTO of the New York Public Library proved to be a great training ground in how to manage the complexity of upgrading infrastructure, moving applications to the cloud, and building a digital repository. I devised a three-part strategy for the transformation. First, I had to upgrade the aging infrastructure. Second, I had to move the infrastructure and the applications into the cloud, to improve our resiliency, security, and functionality. The third was to figure out how to preserve the library's physical assets, which were deteriorating with age. We decided to digitize the assets to preserve them permanently. Within 5 years, the repository had over a petabyte of assets in it and was continuing to grow. The result was a world-class computing environment that moved a beloved, trusted public city library into the digital 21st century, where it can be accessed by future generations. Lesson #2 – the secret to our success at NYPL was that the technology platforms and applications we used were all developed by best-of-breed providers. We recognized that we were in the data business rather than the R&D business, and as such, didn't build anything ourselves. Instead, we took pride in working with and learning from industry leaders.

  3. Future-Proof Your Thinking

The pace of change is so much more rapid than it was even five years ago. Being able to recognize that the landscape is evolving, pivot at speed, and adopt new technology within the organization is now an essential skillset for technology leaders. I am personally excited about the 'internet of things' (IoT) and the data being collected at the edge, which will be enhanced by 5G capabilities. Also, AI and ML are on the cusp of making a 'next level' leap. I think there are lots of good applications for them; we just need to figure out how to use them responsibly. Lesson #3 is that as technology leaders, we need to be constantly looking around corners and to remain open-minded and curious about what's next. It is important for all leaders and aspiring leaders to ask questions and to challenge the status quo.

  4. The Human Factor Remains a Top Priority

New technology comes with its own set of challenges. I believe the issues of privacy and security are the most pressing. Data is being collected everywhere and has often proved to be more valuable than the platform it sits on. Hence, it is paramount to understand evolving data and privacy standards, as well as how to secure data and identify breaches. Then there are also moral and ethical issues around AI. While the opportunities are limitless, it is of utmost importance that we maintain our moral and democratic compass and apply technology in a way that benefits society. Lesson #4 is that while it's challenging to get the balance between innovation, opportunity, and ethics right, it's a battle worth fighting.

  5. Facts Matter – Strive for Balance

Another issue for me is information overload. Knowing what is real and what isn't has never been more important. This is where go-to trusted news and academic sources come into play. Two influencers I follow are Dan Faggella from Emerj and Bernard Marr. Both focus on AI, and it's motivating to hear and read what they have to say. I also read the MIT Technology Review and listen to several technology podcasts. Lesson #5 is that it's critical to continue to seek knowledge and to make a point of agnostically learning from other technologists, business-people, and vendors. Doing your own research and triangulation in the age of 'alternative facts' ensures that you stay informed and relevant, and are able to separate fact from fiction.

In summary, as we enter the ‘Next Normal’, I anticipate that the pace of change will be faster than ever.  However, it’s important to remember that it’s not technology that leads the way, it’s people.  Staying in touch with technology trends and solutions is obviously important, but so is staying in touch with your values and humanity.  At the end of the day, technology is just an enabler and it’s the human values we apply to it that make the difference in how impactful it will be.

About the Author –

Jane Aboyoun is the Chief Information Officer at SCO Family of Services, a non-profit agency that helps New Yorkers build a strong foundation for the future. In this role, Jane is responsible for leading SCO’s technology strategy, and managing the agency’s technology services to support business applications, architecture, data, engineering, and computing infrastructure.

As an accomplished CIO / CTO, Jane has spent 20 years in the C-suite in a variety of senior technology leadership roles for global, world-class brands such as Nestlé Foods, KPMG, Estée Lauder Companies, Walt Disney Company, and the New York Public Library.

From Good to Great – DNA of a Successful Leader (PART II)

Rajeswari S

“Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others” – Jack Welch

In my previous article, I wrote about a few qualities that make for a good leader. In this article, I discuss a few ways in which a leader can go from good to great.

  1. Seek to understand and be understood: Seeking feedback and taking criticism is not an easy task for anyone. When you hold a leadership position and people look up to you, it is even more difficult. But a true leader does exactly that and does it HONESTLY. A good leader focuses on the needs of others. When you are open to feedback and constructive criticism, you earn the right to give the same to others. Make genuine efforts to listen when your team speaks. Great leaders listen first, speak second.
  2. Be there: Being there is not just about being the center of attention. You need to be there for your people during critical times and help members across your organization find solutions to roadblocks. Mentorship is an art. Your people should accept you as their mentor, and gaining that space is not easy.
  3. Demonstrate empathy and compassion: This quality is an extension of the previous point. When you are laser-focused on your goals, it can be difficult to focus on the needs of others around you. You need to know not only how your actions affect people, but also what you need to do to show understanding and sympathy for others.
  4. Get curious: Leaders are often driven by an insatiable desire to learn; they push the limits of what's possible and explore opportunities as a continuous process. Expanding your mind can often be as simple as reading and asking 'why' more often. Curiosity can help you get to the root of a problem and promote better ideas and thoughts. Leaders think and embrace others' ideas. A correctly asked question with the right intention can lead to many opportunities and achievements.
  5. Be in the know: Leaders go out of their way to stay educated and up-to-date. Intentional learning is a continuous process of acquiring and understanding information with the goal of making yourself more intelligent and prepared on a specific subject. People cannot always see your work; it is how you talk that creates the first impression. When you make an informed, up-to-date speech, you get the edge over others.
  6. Enjoy the ride: Smart leaders know that their journey is often more rewarding than their destination. That is why they take the time to enjoy life and what they have already achieved, because they know nothing lasts forever. When you can enjoy the journey, you'll be amazed by what you can learn. A great leader embraces each day as an experience. They grow every day!
  7. Celebrate and Connect: Leaders working toward a brighter future share their success with the people they care about – business partners and customers, family and friends, employees and their families, etc. Great leaders celebrate others' victories as their own; this creates a high-performing team and culture. A true captain takes time to know the people around her and their lives. It goes a long way in running not only a successful business but a happy one too!
  8. Pursue new experiences: Mountains are interesting to watch and hike. Why? Because of their rugged terrain and unpredictable nature. Straight roads are boring; that is why we doze off on a highway drive! An intelligent leader is never complacent and constantly pushes themselves out of their comfort zone. To stay prepared for any bumps along the road, leaders actively pursue new experiences that allow them to learn and grow – from starting a new venture, to coaching a little league team, to diversifying the business.

Unique brands of Leadership

A quick look at successful CEOs, new-age entrepreneurs, and their unique leadership mantras:

  • Sundar Pichai, CEO, Alphabet Inc. and its subsidiary Google LLC

Leadership mantra:

  1. Never forget your roots
  2. Focus more on others’ success than your own
  3. Empower the youth
  4. Stay humble and keep learning

  • Bill Gates, Founder, Microsoft

Leadership mantra: 

  1. Knowledge is different from wisdom
  2. Take a step-by-step approach to make progress towards your vision
  3. Empower people to create new opportunities to explore ideas; Embrace creativity
  4. Be caring and passionate

  • Suchi Mukherjee, CEO, Limeroad, an Indian online marketplace
Leadership mantra: True leadership is about enabling the voice of the youngest team member.

  • Amit Agarwal, CEO, NoBroker, a real estate search portal
Leadership mantra: Leaders provide employees the opportunity to be leaders themselves.


About the Author –

Rajeswari is part of the IP team at GAVS. She has been involved in technical and creative content development for the past 13 years. She is passionate about music and writing and spends her free time watching movies or going for a highway drive.

 

Container Security

Anandharaj V

We live in a world of innovation and are beneficiaries of new advancements. New advancements in software technology also come with potential security vulnerabilities.

‘Containers’ are no exception. Let us first understand what a container is and then the vulnerabilities associated with it and how to mitigate them.

What is a Container?

You might have seen containers in a shipyard. They are used to isolate different cargos transported via ships. Software technologies use a containerization approach in the same way.

Containers are different from Virtual Machines (VMs), which need a guest operating system running on a host operating system (OS). Containers use OS-level virtualization, in which the required processes, CPU, memory, and disk are virtualized so that containers can run without a separate guest operating system.

In containers, software and its dependencies are packaged so that it can run anywhere, whether on an on-premises desktop or in the cloud.


Source: https://cloud.google.com/containers

As stated by Google, “From Gmail to YouTube to Search, everything at Google runs in containers”.

Container Vulnerabilities and Countermeasures

Containers Image Vulnerabilities

When a container is created, its image may be fully patched with no known vulnerabilities. But a vulnerability might be discovered later, while the container image is no longer being patched. Traditional systems can be patched in place when a fix for the vulnerability becomes available, but for containers, updates must be made upstream in the images, which are then redeployed. So, containers carry vulnerabilities when an older image version is deployed.

Also, if the container image is misconfigured or unwanted services are running, it will lead to vulnerabilities.

Countermeasures

If you use traditional vulnerability assessment tools to assess containers, it will lead to false positives. You need to consider a tool that has been designed to assess containers so that you can get actionable and reliable results.

To avoid container image misconfiguration, you need to validate the image configuration before deploying.
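
As one possible way to automate that validation step, here is a minimal sketch in Python that reads the configuration emitted by 'docker inspect' and flags two common misconfigurations before deployment. The image name, the specific checks, and the keyword list are illustrative assumptions, not a complete policy.

    import json
    import subprocess

    IMAGE = "myregistry.example.com/app:1.4.2"  # hypothetical image name

    # 'docker inspect' emits the image metadata as a JSON array; the Config
    # section carries the user, environment variables, and other settings.
    raw = subprocess.run(["docker", "inspect", IMAGE],
                         capture_output=True, text=True, check=True).stdout
    config = json.loads(raw)[0]["Config"]

    findings = []
    if (config.get("User") or "") in ("", "root", "0"):
        findings.append("image runs as root; set a non-root USER")
    for var in config.get("Env") or []:
        name = var.split("=", 1)[0].upper()
        if any(word in name for word in ("PASSWORD", "SECRET", "TOKEN")):
            findings.append(f"possible secret baked into the image: {name}")

    print("\n".join(findings) or "no obvious misconfigurations found")

A gate like this can run in the CI/CD pipeline so that misconfigured images never reach the registry, let alone production.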

Embedded Malware and Clear Text Secrets

Container images are collections of files packaged together. Hence, there are chances of malicious files getting added unintentionally or intentionally. That malicious software will have the same effect as on traditional systems.

If secrets are embedded in clear text, it may lead to security risks if someone unauthorized gets access.

Countermeasures

Continuous monitoring of all images for embedded malware with signature and behavioral detection can mitigate embedded malware risks.

Secrets should never be stored inside a container image; when required, they should be provided dynamically at runtime.
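
A minimal sketch of that pattern in Python is shown below: the application reads the secret from an environment variable injected at runtime, for example by the orchestrator's secret store, instead of baking it into the image. The variable name is hypothetical.

    import os

    # The secret is supplied to the container at runtime (orchestrator secret
    # store, vault integration, or environment injection), never at build time.
    DB_PASSWORD = os.environ.get("DB_PASSWORD")  # hypothetical variable name

    if DB_PASSWORD is None:
        raise RuntimeError("DB_PASSWORD was not provided to the container at runtime")

    # ... use DB_PASSWORD to open the database connection; never log or print it

In Kubernetes, for example, the same variable could be populated from a Secret object; the mechanism differs by orchestrator, but the principle of runtime injection stays the same.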

Use of Untrusted Images

Containers have the advantages of ease of use and portability. This capability may lead teams to run container images from a third party without validating them, and thus can introduce data leakage, malware, or components with known vulnerabilities.

Countermeasures

Your team should maintain and use only trusted images, to avoid the risk of untrusted or malicious components being deployed.

Registry Risks

A registry is nothing but a repository for storing container images.

  1. Insecure connections to registries

Images can contain sensitive information. If connections to registries are made over insecure channels, it can lead to man-in-the-middle attacks that intercept network traffic to steal programmer or admin credentials, or to serve outdated or fraudulent images.

To overcome the insecure connection issue, you should configure development tools, and containers at runtime, to connect to registries only over encrypted channels.
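
As a small illustration, the sketch below (Python, using the requests library) probes a hypothetical private registry strictly over HTTPS. Because requests verifies TLS certificates by default, a plain-HTTP or wrongly certified registry fails loudly instead of silently downgrading; the registry hostname is an assumption.

    import requests

    REGISTRY_HOST = "registry.example.com"  # hypothetical private registry

    # The Docker Registry HTTP API exposes a version-check endpoint at /v2/.
    # TLS certificate verification is on by default (verify=True).
    resp = requests.get(f"https://{REGISTRY_HOST}/v2/", timeout=5)
    print("registry reachable over TLS, HTTP status:", resp.status_code)

The same discipline applies to the container runtime and CI runners: do not allow insecure (plain-HTTP) registries in their configuration.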

  2. Insufficient authentication and authorization restrictions

As we have already seen, registries store container images with sensitive information. Insufficient authentication and authorization can result in exposure of the technical details of an app and loss of intellectual property. It can also lead to compromise of containers.

Access to registries should be authenticated, and only trusted entities should be able to add images; all write access should be periodically audited and read access should be logged. Proper authorization controls should be enabled to avoid authentication- and authorization-related risks.

Orchestrator Risks

  1. Unbounded administrative access

Many orchestrators are designed with the assumption that all users are administrators, but a single orchestrator may run different apps with different access levels. Treating all users as administrators puts the operation of every container managed by the orchestrator at risk.

Orchestrator users should be given only the required access, with proper role-based authorization, to avoid the risk of unbounded administrative access.

  2. Poorly separated inter-container network traffic

In containerized environments, traffic between hosts is routed through virtual overlay networks managed by the orchestrator. This traffic is not visible to existing network security and management tools, since network filters only see the encrypted packets traveling between the hosts; this leads to security blindness and ineffective traffic monitoring.

To overcome this risk, orchestrators should be configured to separate network traffic into virtual networks by sensitivity level.

  3. Orchestrator node trust

You need to pay special attention to maintaining trust between the hosts, especially the orchestrator node. Weakness in the orchestrator configuration increases risk. For example, communication between the orchestrator, DevOps personnel, and administrators may be unencrypted and unauthenticated.

To mitigate this, orchestration should be configured securely for nodes and apps. If any node is compromised, it should be isolated and removed without disturbing other nodes.

Container Risks

  1. App vulnerabilities

It is always good to have defense in depth. Even after following the recommendations we have seen above, containers may still be compromised if the apps themselves are vulnerable.

As we have already seen, traditional security tools may not be effective when used for containers. So, you need a container-aware tool that detects behavioral anomalies in the app at runtime to find and mitigate threats.

  2. Rogue containers

It is possible to have rogue containers. Developers may have launched them to test their code and left them running. They may lead to exploits, as those containers might not have been thoroughly checked for security loopholes.

You can overcome this by maintaining separate environments for development, test, and production, with role-based access control.

Host OS Risks

  1. Large attack surface

Every operating system has an attack surface, and the larger the attack surface, the easier it is for an attacker to find and exploit a vulnerability and compromise the host operating system and the containers that run on it.

If you cannot use a container-specific operating system to minimize the attack surface, you can follow the NIST SP 800-123 guide to general server security.

  2. Shared kernel

If you run only containers on a host OS, you will have a smaller attack surface than on a normal host machine, where you need additional libraries and packages to run a web server, a database, and other software. At the same time, all containers on a host share that host's kernel, so a kernel compromise affects every container running on it.

You should not mix container and non-container workloads on the same host machine.

If you wish to further explore this topic, I suggest you read NIST.SP.800-190.



About the Author –

Anandharaj is a DevSecOps Lead at GAVS and has over 13 years of experience in cybersecurity across different verticals, including network security, application security, computer forensics, and cloud security.

Customer Focus Realignment in a Pandemic Economy

Ashish Joseph

Business Environment Overview

The Pandemic Economy has created an environment that has tested businesses to either adapt or perish. The atmosphere has become a quest for the survival of the fittest. On the brighter side, organizations have stepped up and adapted to the crisis in a way that they have worked faster and better than ever before. 

During this crisis, companies have been strategic in understanding their focus areas and where to concentrate the most. From a high-level perspective, we can see that businesses have focused on recovering their sources of revenue, rebuilding operations, restructuring the organization, and accelerating their digital transformation initiatives. In a way, the pandemic has forced companies to optimize their strategies and harness their core competencies in a hyper-competitive, survival-driven environment.

Need for Customer Focused Strategies

A pivotal and integral strategy for maintaining and sustaining growth is for businesses to avoid the churn of existing customers and ensure that the quality of delivery builds trust for future collaborations and referrals. Many organizations, including GAVS, have understood that Customer Experience and Customer Success are consequential for customer retention and brand affinity.

Businesses should realign themselves in the way they look at sales funnels. A large portion of the annual budget is usually allocated towards the top of the funnel activities to acquire more customers. But companies with customer success engraved in their souls, believe in the ideology that the bottom of the funnel feeds the top of the funnel. This strategy results in a self-sustaining and recurring revenue model for the business.

An independent survey conducted by the Customer Service Managers and Professionals Journal found that companies pay six times more to acquire new customers than to keep an existing one. In this pandemic economy, the cost of customer acquisition will be much higher than before, as organizations must be very frugal in their spending. The best step forward is to make sure companies strive for excellence in their customer experience and deliver measurable value to them. A study conducted by Bain and Company titled "Prescription for Cutting Costs" talks about how increasing customer retention by 5% increases profits by 25% to 95%.

The path to a sustainable and high-growth business is to adopt customer-centric strategies that yield more value and growth for customers. Enhancing customer experience should be paramount, and proper governance must be in place to monitor and gauge strategies. Governance in the world of customer experience must revolve around identifying and managing the resources needed to drive sustained action, establishing robust procedures to organize processes, and ensuring a framework for stellar delivery.

Scaling to ever-changing customer needs

A research body called Walker Information conducted independent research on B2B companies focusing on key initiatives that drive customer experience and future growth. The study included various customer experience leaders, senior executives, and influencers representing a diverse set of business models in the industry. They published the report titled "Customer 2020: A Progress Report", and the following are strategies that best meet the changing needs of customers in the B2B landscape.


Over 45% of the leaders highlighted the importance of developing a customer-centric culture that simplifies products and processes for the business. Now the question that we need to ask ourselves is, how do we as an organization scale up to these demands of the market? I strongly believe that each of us, in the different roles we play in the organization, has an impact.

The Executive Team can support more customer experience strategies, formulate success metrics, measure the impact of customer success initiatives, and ensure alignment with respect to the corporate strategy.

The Client Partners can ensure that they represent the voice of the customer, plot a feasible customer experience roadmap, be on point with customer intelligence data, and ensure transparency and communication with the teams and the customers. 

The cross-functional team managers and members can own and execute process improvements, personalize and customize customer journeys, and monitor key delivery metrics.

When all these members work in unison, the target goal of delivery excellence coupled with customer success is always achievable.

Going Above and Beyond

Organizations should aim for customers who can be retained for life. Retention depends upon how far a business is willing to go the extra mile to add measurable value to its customers. Business contracts should evolve into partnerships that collaborate, leveraging each other's competitive advantages to bring solutions to real-world business problems.

As customer success champions, we should reevaluate the possibilities in which we can make a difference for our customers. By focusing on our core competencies and using the latest tools in the market, we can look for avenues that can bring effort savings, productivity enhancements, process improvements, workflow optimizations, and business transformations that change the way our customers do business. 

After all, We are GAVS. We aim to galvanize a sense of measurable success through our committed teams and innovative solutions. We should always stride towards delivery excellence and strive for customer success in everything we do.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Quantum Computing

Vignesh Ramamurthy

In the MARVEL multiverse, Ant-Man has one of the coolest superpowers out there. He can shrink himself down as well as grow to any size he desires! He was able to reduce to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, there is Quantum computing. Quantum computers are more powerful than supercomputers and tech companies like Google, IBM, and Rigetti have them.

Google achieved quantum supremacy with its quantum computer 'Sycamore' in 2019. It claims Sycamore performed, in 200 seconds, a calculation that would take the world's most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, with the temperature being close to absolute zero.


Quantum Physics

Quantum computing falls under a discipline called quantum physics. Quantum computing's heart and soul resides in what we call qubits (quantum bits) and superposition. So, what are they?

Let's take a simple example: imagine you have a coin and you spin it. One cannot know the outcome unless it falls flat on a surface. It can be either heads or tails. However, while the coin is spinning, you can say the coin's state is both heads and tails at the same time (much like a qubit). This state is called superposition.

So, how do they work and what does it mean?

We know classical bits take a value of 0 or 1 (negative or positive states). Qubits hold both at the same time. These qubits, in the end, pass through something called the 'Grover operator', which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, just like how Doctor Strange did in the movie Infinity War. However, what is important is to understand how this technically works.

We shall see two explanations which I feel give an accurate picture of the technical aspects.

The following explanation of the quantum mechanics involved is from Scott Aaronson, a quantum computing scientist at the University of Texas at Austin.

Amplitude – every state has an amplitude, which can be positive or negative: an amplitude for being 0 and an amplitude for being 1. The goal is to make sure that the amplitudes leading to wrong answers cancel each other out, so that the amplitude of the right answer remains the only likely outcome.

Quantum computers function using a process called superconductivity. We have a chip the size of an ordinary computer chip. There are little coils of wire in the chip, nearly big enough to see with the naked eye. There are 2 different quantum states of current flowing through these coils, corresponding to 0 and 1, or the superpositions of them.

These coils interact with each other; nearby ones talk to each other and generate what is called an entangled state, which is an essential state in quantum computing. The way qubits interact is completely programmable, so we can send electrical signals to these qubits and tweak them according to our requirements. This whole chip is placed in a refrigerator at a temperature close to absolute zero. This way, superconductivity occurs, which makes the coils briefly behave as qubits.

The following explanation is based on 'Kurzgesagt – In a Nutshell', a YouTube channel.

We know a bit is either 0 or 1. Four bits can be 0000, 0001, and so on. Four classical bits can be in only one of the 2^4 = 16 possible configurations at a time. Four qubits in superposition, however, can be in all those 16 combinations at once.

This grows exponentially with each extra qubit. 20 qubits can hence store a million values in parallel. As seen, these entangled states interact with each other instantly. Hence while measuring one entangled qubit, we can directly deduce the property of its partners.

A normal logic gate gets a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them, and manipulates probabilities. Now it finally measures the outcome, collapsing superpositions to an actual sequence of 0s and 1s. This is how we get the entire set of calculations performed at the same time.

What is a Grover Operator?

We now know that by measuring one entangled qubit, we can readily deduce properties of its partners. Grover's algorithm works because these quantum particles are entangled. Since one entangled qubit is able to vouch for its partners, the algorithm iterates until it finds the solution with a high degree of confidence.
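
To make the idea concrete, here is a minimal sketch in Python with NumPy that simulates Grover's search over a tiny 8-item space. It is illustrative only and not tied to any quantum SDK: the state starts in a uniform superposition, the oracle flips the sign of the amplitude of the 'correct' item, and the Grover (diffusion) operator reflects all amplitudes about their mean, boosting the marked one.

    import numpy as np

    n_items = 8   # search space of size N = 2^3, i.e. 3 qubits
    marked = 5    # index of the single 'correct' answer (illustrative)

    # Start in the uniform superposition: equal amplitude for every state
    state = np.ones(n_items) / np.sqrt(n_items)

    # Oracle: flips the sign of the amplitude of the marked state
    oracle = np.eye(n_items)
    oracle[marked, marked] = -1

    # Diffusion (Grover) operator: 2|s><s| - I, a reflection about the mean amplitude
    s = np.ones(n_items) / np.sqrt(n_items)
    diffusion = 2 * np.outer(s, s) - np.eye(n_items)

    # Roughly (pi/4) * sqrt(N) iterations are optimal
    iterations = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)

    # Probability of measuring each state; the marked index now dominates (~0.94)
    print(np.round(state ** 2, 3))

A real quantum computer performs this interference with physical qubits rather than a simulated vector, but the cancellation of wrong answers and amplification of the right one is exactly what the amplitudes above are doing.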

What can they do?

As of now, quantum computing hasn’t been implemented in real-life situations just because the world right now doesn’t have such an infrastructure.

Assuming they are efficient and ready to be used, we can make use of them in the following ways:

1) Self-driving cars are picking up pace. Quantum computers can be used for these cars by calculating all possible outcomes on the road. Apart from sensors to reduce accidents, roads consist of traffic signals. A quantum computer would be able to go through all the possibilities of how traffic signals function (the time intervals, the traffic, everything) and feed these self-driving cars with the single best outcome accordingly. What would result is nothing but a seamless commute with no hassles whatsoever. It'll be the future as we see in movies.

2) If AI is able to construct a circuit board after having tried everything in the design architecture, this could result in promising AI-related applications.

Disadvantages

RSA encryption is the scheme that underpins much of the internet. A quantum computer could break it, and hackers might steal top confidential information related to health, defence, personal data, and other sensitive information. At the same time, it could help achieve the most secure encryption, by identifying the strongest scheme among every possible encryption. This could be done by finding the most secure wall against all the viruses that could infect the internet. If such security were built, it would take a completely new kind of virus to break it, and the chances of that are minuscule.

Quantum computing has its share of benefits. However, it will take years to be put to use. The infrastructure and the amount of investment needed are humongous. After all, it can only be used when there are very reliable real-world use cases. It needs to be tested for many things. There is no doubt that quantum computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be prepared for it.


About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you would have read about the use case "Zero Knowledge Proofs". In this article, I will be elaborating on:

  • What is Zero Knowledge Proof (ZKP)?
  • What is its role and importance in Healthcare Data Sharing?
  • How Blockchain Powered GAVS Rhodium Platform helps address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
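
A simplified, classical illustration of the idea is the Schnorr identification protocol, one of the textbook zero-knowledge proofs of knowledge. In the Python sketch below, the prover convinces the verifier that it knows a secret x behind the public value y = g^x mod p without ever revealing x. The numbers are deliberately tiny toy values; real systems use large cryptographic groups and vetted libraries.

    import secrets

    # Toy public parameters (real deployments use 2048-bit+ groups)
    p = 23                      # small prime modulus
    g = 5                       # generator of the multiplicative group mod p
    x = 7                       # prover's secret
    y = pow(g, x, p)            # public value shared with the verifier

    # Commitment: the prover picks a random nonce r and sends t = g^r mod p
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)

    # Challenge: the verifier replies with a random challenge c
    c = secrets.randbelow(p - 1)

    # Response: the prover answers with s = r + c*x (mod p-1); x never leaves the prover
    s = (r + c * x) % (p - 1)

    # Verification: g^s must equal t * y^c (mod p) if and only if the prover knew x
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("proof accepted")

The verifier learns nothing about x beyond the fact that the prover knows it, which is exactly the property that the healthcare transactions described below can exploit.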

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is done based on a lack of trust.

  • Insurance companies are always wary of fraudulent claims (which is anyhow a major issue), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the patient has indeed been advised to take the medicines before dispensing them.
  • Patients, too, often want to make sure that the diagnosis and treatment given to them are proper and that no misdiagnosis has occurred.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, either of the parties, i.e. patient, hospital, pharmacy, insurance companies, can take on the role of a verifier, and typically patients and sometimes hospitals are the provers.

While ZKP can be applied to any of the transactions involving the above parties, current industry research is mostly focused on patient privacy rights, and ZKP initiatives target how much (or how little) information a patient (prover) can share with a verifier before getting the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I am not getting into the fundamentals of Blockchain here, readers should understand that one of the fundamental backbones of Blockchain is trust within the context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, yet allowing them to participate in transactions.

Some of the characteristics of the Blockchain transaction that makes it conducive for Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • Smart contract instance (i.e. the particular invocation of that smart contract) has an owner i.e. the public key of the account holder who creates the same, for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as it knows the public key of the initiator.
  • Some of the important aspects of an approval life cycle like validation, approval, rejection, can be delegated to other stakeholders by delegating that task to the respective public key of that stakeholder.
  • For example, if a doctor needs to approve a medical condition of a patient, the same can be delegated to the doctor and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented in any Blockchain platform including totally uncontrolled public blockchain platforms, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known, and each participant trusts the other, but the due diligence that is needed with the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain Involving Multiple Other Organizations, whose access rights differ. Each member controls their own access to Blockchain Network with Cryptographic Keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept in building trust-based networks. While a basic Blockchain platform can help realize the concept in a trust-based manner, a lot of research is being done to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK ("zero-knowledge succinct non-interactive argument of knowledge") is one such construction of a zero-knowledge proof. Developers have already started integrating zk-SNARKs into the Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, also uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article about Patient Data Sharing, Rhodium is a futuristic framework that aims to take the Patient Data Sharing as a journey across multiple stages, and at the advanced maturity levels Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey


The healthcare industry today is affected by fraud and lack of trust on one side, and by growing patient privacy concerns on the other. In this context, the introduction of Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.