Large Language Models: A Leap in the World of Language AI

At Google’s latest annual developer conference, Google I/O, CEO Sundar Pichai announced their latest breakthrough, the “Language Model for Dialogue Applications”, or LaMDA. LaMDA is a language AI technology that can chat about any topic. Chatting is something even an ordinary chatbot can do, so what makes LaMDA special?

Modern conversational agents or chatbots follow a narrow, pre-defined conversational path, while LaMDA can engage in free-flowing, open-ended conversation just like humans. Google plans to integrate this new technology into its search engine as well as other products like Google Assistant, Workspace, and Gmail, so that people can retrieve any kind of information, in any format (text, visual, or audio), from Google’s suite of products. LaMDA is an example of what is known as a Large Language Model (LLM).

Introduction and Capabilities

What is a language model (LM)? A language model is a statistical and probabilistic tool that determines the probability of a given sequence of words occurring in a sentence. Simply put, it is a tool trained to predict the next word in a sentence, much like the autocomplete feature in a text messaging app. Where weather models predict the 7-day forecast, language models try to find patterns in human language, one of computer science’s most difficult puzzles, as languages are ever-changing and adaptable.
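As a toy illustration of next-word prediction (not how LaMDA or GPT-3 work internally, since they use large neural networks, but the same underlying idea of modeling word probabilities), here is a minimal bigram sketch; the corpus and function name are invented for this example:

```python
from collections import Counter, defaultdict

# Toy training corpus; a real LLM is trained on hundreds of gigabytes of text.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" in 2 of 4 cases
```

A large language model does essentially this at a vastly larger scale, conditioning on long contexts rather than a single preceding word.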

A language model is called a large language model when it is trained on an enormous amount of data. Other examples of LLMs are Google’s BERT and OpenAI’s GPT-2 and GPT-3. GPT-3 is the largest language model known at the time of writing, with 175 billion parameters trained on 570 gigabytes of text. These models have capabilities ranging from writing a simple essay to generating complex computer code, all with limited to no supervision.

Limitations and Impact on Society

As exciting as this technology may sound, it has some alarming shortcomings.

1. Bias: Studies have shown that these models are embedded with racist, sexist, and discriminatory ideas. They can also encourage genocide, self-harm, and child sexual abuse. Google is already using an LLM for its search engine, and it is rooted in these biases. Since Google is not only used as a primary knowledge base by the general public but also provides an information infrastructure for various universities and institutions, such biased results can have very harmful consequences.

2. Environmental impact: LLMs also have an outsize impact on the environment, as training them emits a shockingly high amount of carbon dioxide, equivalent to nearly five times the lifetime emissions of an average car, including the car’s manufacturing.

3. Misinformation: Experts have also warned about the mass production of misinformation through these models; because of the models’ fluency, people can be fooled into thinking the output was produced by humans. Some models have also excelled at writing convincing fake news articles.

4. Mishandling negative data: The world speaks many languages that are not prioritized by Silicon Valley. These languages are unaccounted for in mainstream language technologies, and the communities that speak them are affected the most. When a platform uses an LLM that cannot handle these languages to automate its content moderation, the model struggles to control misinformation. During extraordinary situations, like a riot, the amount of unfavorable data coming in is huge, and this ends up creating a hostile digital environment. The problem does not end there. When fake news, hate speech, and other negative text is not filtered out, it is used as training data for the next generation of LLMs, which then parrot these toxic linguistic patterns back onto the internet.

Further Research for Better Models

Despite all these challenges, very little research is being done to understand how this technology can affect us or how better LLMs can be designed. In fact, the few big companies that have the resources required to train and maintain LLMs refuse, or show no interest in, investigating them. And it’s not just Google that is planning to use this technology: Facebook has developed its own LLMs for translation and content moderation, while Microsoft has exclusively licensed GPT-3. Many startups have also started creating products and services based on these models.

While the big tech giants are trying to create private and mostly inaccessible models that cannot be used for research, a New York-based startup, called Hugging Face, is leading a research workshop to build an open-source LLM that will serve as a shared resource for the scientific community and can be used to learn more about the capabilities and limitations of these models. This one-year-long research (from May 2021 to May 2022) called the ‘Summer of Language Models 21’ (in short ‘BigScience’) has more than 500 researchers from around the world working together on a volunteer basis.

The collaborative is divided into multiple working groups, each investigating different aspects of model development. One of the groups will work on calculating the model’s environmental impact, while another will focus on responsible ways of sourcing the training data, free from toxic language. One working group is dedicated to the model’s multilingual character including minority language coverage. To start with, the team has selected eight language families which include English, Chinese, Arabic, Indic (including Hindi and Urdu), and Bantu (including Swahili).

Hopefully, the BigScience project will help produce better tools and practices for building and deploying LLMs responsibly. The enthusiasm around these large language models cannot be curbed, but it can surely be nudged in a direction with fewer shortcomings. Soon enough, all our digital communications, be it emails, search results, or social media posts, will be filtered using LLMs. These large language models are the next frontier for artificial intelligence.

About the Author –

Priyanka Pandey

Priyanka is a software engineer at GAVS with a passion for content writing. She is a feminist and is vocal about equality and inclusivity. She believes in the cycle of learning, unlearning and relearning. She likes to spend her free time baking, writing and reading articles especially about new technologies and social issues.

Exceptional Customer Experience at the Heart of Great Products

The Customer Experience Strategy

Apple Inc. stands out as one of the most innovative and customer-focused companies in the world. Its brand positioning and the value delivered by its products have catapulted it to become one of the most valuable brands in the market today. The visionary responsible for Apple’s monumental growth is none other than its co-founder, Steve Jobs. The fundamental principle he followed in all his strategies was to keep customers at the center and simplify their lives with Apple products.


All of Apple’s products took a customer-first approach, and the company invested heavily in understanding customers and their pain points. The products aimed for the best customer experience across different domains and made every user crave an Apple product. This strategy transformed Apple from a technology company into a religion. When we look at the way we currently solve customer problems, we tend to start with technological feasibility and then work towards solving the problem at hand. If the solutions are not feasible from a technological standpoint, certain customer needs are compromised.

Taking customer needs as a primary lever over technology is a very challenging move. Here organizations must be ready to adapt and experiment with little to no historical data to solve customer problems that are in front of them. This would require them to challenge the traditional ways in which they look at technology and the approach towards customer-centricity.

Customer Experience Foundations

The foundation of all customer experience is the cumulative value provided to the customer during their interactions with the brand. The customer experience encompasses all the ways a customer interacts with the brand, at every stage of the customer journey.

When we look at it from a product standpoint, the total product experience is the primary value offered to customers. Here we have to take into account how the customer experiences the product, how the product delivers a lasting impression, and how it helps build a connection with the brand.

For businesses to succeed, a positive customer experience is crucial. A loyal customer boosts your revenue and eventually promotes and advocates for you, bringing in more business from their network.


Variables Influencing Customer Experience

Nowadays, no company can afford to provide a substandard customer experience, regardless of the industry, its experience in the market, or its reputation. The way organizations deal with customers influences retention rates, brand value, and ultimately financial performance. Given these facts, there are several variables responsible for the overall customer experience.

1. Customer-Centric Culture

Businesses that treat their customers as kings, the most prized asset of the organization, have reported higher returns than counterparts that do not emphasize this stance.

A customer-centric culture revolves around solving customer problems and adding real value at the end of the day. The organization’s leaders must make the effort to ensure that teams focus on providing a consistent customer experience through marketing, the sales cycle, and the customer service phase.

A customer-centric work culture brings in the values of being there for the customer, solving their critical needs, and supporting them through the resolution process.

2. Product Value

A product that doesn’t solve customer needs and problems does not add value at all. Products that address all the pain points in the customer’s needs and expectations require less post-sales support than products that do not.

Sustained success lies in a well-built product. Regardless of the brand’s industry and specialization, the product is what defines the brand. Even though marketing, sales, and customer service are required for a business to thrive, the brand will fail if the product is not effective.

3. Customer Touch Points

A connected customer is vital to all organizations. A brand’s reliability strongly depends on how easily the customer can approach representatives, whether to learn about the product or to solve issues after the sale. Hence, providing customers with effective touchpoints across multiple channels is important to keep them engaged and to be readily available to solve any issues they face. These channels and touchpoints can include email, phone, text, instant messaging, social media, the website, or even a third-party review site. All focus on staying connected to the customer and being there to address their needs.

4. Technology

Technology has enabled brands to connect with customers more deeply than ever before. Companies now use technology to prevent losses and to create solutions to the shortcomings in their customer experience strategies. Analytics instruments help organizations get real-time feedback and read the customer pulse.

Technology enables brands to modernize and structure their products effectively and maximize their efficiency. It helps reduce or eliminate labor-intensive customer requests and speeds up the completion time of the processes. This empowers brands to have additional functionalities and cut costs at the same time.

5. People

A unified team of individuals makes a successful customer experience possible. Suppliers, marketers, salespeople, customer service agents, and many others play an essential role in delivering a best-in-class customer experience. For this to happen, each of these individuals must be well-versed in the organizational strategy and have the morale to deliver an impactful customer experience that adds value.

Value-Driven Customer Experiences

When customers show interest in a brand’s product, they invest their time and money in the entire process, aiming to gain business benefits from the product; this in turn drives value for the brand. Of all the variables that influence the customer experience, product pricing and contract value have an indirect influence on the type of relationship customers will have with the brand.

A study by Christopher Meyer and Andre Schwager, “Understanding Customer Experience”, published in the Harvard Business Review, classifies customers based on billed revenue and satisfaction scores.


The matrix yields four types of customers: Dangling, Growth, At-Risk, and Model customers. An ideal business would always strive to maximize the number of Model customers, who generate a high volume of billed revenue and are satisfied with the service they receive.

The challenge here is to create a state where customers are comfortable with investing in a product that can guarantee business benefits out of its utilization. To achieve this state, brands must work towards understanding the value matrix that the product offers and tie its output with a measurable value realization for their customers. If the business benefits cannot be measured, customers tend to not invest heavily in the product and remain in the growth segment of the matrix.

At the end of the day, it is the customer experience at the heart of the product that makes customers realize the product’s true value and its impact on their own business.

About the Author –

Ashish Joseph

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management.

He runs two independent series called BizPective & The Inside World, focusing on breaking down contemporary business trends and growth strategies for independent artists, on his website www.ashishjoseph.biz

Outside work, he is very passionate about basketball, music, and food.

Customizing OOTB IT Network Security Software Products

Sundaramoorthy S

As global IT is rapidly digitalized, the network security requirements of major businesses are offered as Out-of-The-Box (OOTB) IT security products by IT OEMs (Information Technology Original Equipment Manufacturers).

The products offered by OEMs adhere to global standards and regulations like ISO/IEC 27001, NIST, GDPR, CCPA, and PDPB, which leads businesses to buy licenses for the end products with the intention of saving time and money. However, this intention is often defeated while integrating, deploying, and maintaining the product solution.

This article focuses on the customizations of OOTB products that should be avoided, and on steps for tuning customizations to business requirements within the licensed products.

Customization is desirable when it lies within the OOTB product’s scope. Moving beyond those limits leads to multiple operational challenges.

Customizations that are narrower in scope end up being under-utilized, and certain customizations can be done without altogether. It is ideal to conduct an analysis to validate whether the time and money invested in such customizations will give proportionate benefits and returns.

Product OEMs should be consulted about future releases and implementations before taking such decisions. Choosing the right implementation partner is equally important. Failing to do so may result in issues in production systems in terms of audit, governance, security, and operations, and realizing the flaw at later stages costs businesses heavily. Extensive testing must be conducted to ensure the end-to-end capabilities of the OOTB product are not violated.

Listed below are a few observations based on my discussions with executives who have faced such issues in ongoing and completed implementations.

Customizations to Avoid

  • OOTB products are customized by overwriting thousands of lines of code. This couples the product tightly to the network and makes future upgrades and migrations of the product complex.
  • Disregarding the recommendations of product architects and SMEs, and customizing the existing capabilities of the products to meet a business’s isolated requirements, leads to further hidden issues in the products. Ultimately the business ends up demanding customization everywhere, which violates the intent of an OOTB product.
  • Ad-hoc customizations made to force the products to fit the existing enterprise architecture leave the network vulnerable.
    Below are some related challenges:
    • OOTB products are, in some cases, unable to consume the business data as-is
    • Some business users are unwilling to migrate to new systems, or the business is unable to educate users to utilize them
  • OOTB APIs are not utilized in places where they are required.

Cons of Customizing

  • OEMs provide support for OOTB features only, not for customized ones.
  • The impact of customizations on the product’s performance, optimization, and security is not always clear.
  • Audit and governance become unmanageable if the customizations are not end-to-end.
  • The above issues may lead to a lower return on investment for the customizations.

Steps to Avoid Major Customization

For New implementations

  • The roadmap and strategy should be derived by doing a detailed analysis of the current and future state while selecting the product solution.
  • PoCs for future-state requirements should be done with multiple products that offer similar services, in order to select the right one.
  • A matrix of future requirements vs. product compliance should be validated.
  • A gap analysis between the current state and the future state should be executed through discussions with product owners and key business stakeholders.
  • Implementation partners could be engaged in these activities; their experience working with multiple similar products in the market can refine the analysis, so that the selected product is the best in terms of cost and techno-functional requirements.

For existing implementations where the product solution is already deployed

  • OOTB product features should be utilized efficiently by vendors, partners, and service providers.
  • To utilize the OOTB product, massaging the existing dataset or minimal restructuring, post risk analysis, is acceptable. This exercise should be done before onboarding the product solution.
  • For any new requirement that is not OOTB, rather than customizing the product solution independently as an end-user (business entity), a collaborative approach involving implementation partners and (minimal) OEM professional services should be taken. This can address complex requirements without major roadblocks in terms of the security and performance of the product solution already deployed in the network. In this approach, support from the product team is also available, which is a great plus.

Role of OEMs

OEMs should take the necessary efforts to understand the needs of the customers and deliver relevant products. This will help in ensuring a positive client experience.

Below are few things the OEMs should consider:

  1. OEMs should have periodic discussions with clients, service providers, and partners, and collect inputs to upgrade their products and remain competitive.
  2. Client-specific local customizations that could be utilized by global clients should be encouraged and implemented.
  3. OEMs should implement the latest technologies and trends in OOTB products sooner rather than later.
  4. OEMs could use the same technical terminology across products that offer similar services; as of now, individual products each use their own, which is neither client- nor user-friendly.

Since security is a top priority for all, the improvements, tips, and pointers discussed above should be followed by all IT OEMs in the market who produce IT network security products.

Customizations in IT security products are not avoidable. But they should be minimal and configurable, driven by business-specific requirements rather than major enhancements.

(Figure: OOTB vs Customization Ratio)


About the Author –

Sundar has more than 13 years of experience in IT, IT security, IDAM, PAM, and MDM projects and products. He is interested in developing innovative mobile applications that save time and money. He is also a travel enthusiast.

Introduction to Shift Left Testing

Abdul Riyaz

Never stop until the very end.

The above statement encapsulates the essence of Shift Left Testing.

Quality Assurance should keep up the momentum of testing throughout the end-to-end flow. This ensures quicker delivery and a quality product, and increases revenue and profitability. It can help transform the software development process. Let me elucidate how.

Traditional Testing vs Shift Left Testing

For several decades, software development followed the Waterfall model, in which each phase depends on the deliverables of the previous phase. Over time, the Agile method provided a much better delivery pattern and reduced project delivery timelines. In this software development model, testing is a continuous process that starts at the beginning of a project. If we instead follow the traditional way of testing only after development, it eventually results in a longer timeline than we imagined.

Hence, it is important to start the testing process parallel to the development cycle, using techniques such as Behavior-Driven Development, to make it more effective and reduce the delivery timeline. To ensure Shift Left Testing is effective, the AUT (Application Under Test) should be tested in an automated way. There are many proven test automation tools available in the IT world today that address this purpose.
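As a minimal sketch of what automated testing of an AUT looks like when written alongside (or before) the code itself, consider the example below. The discount function and its checks are invented for illustration; a real project would express the same checks in a framework such as pytest or JUnit and run them in CI on every commit.

```python
# A hypothetical unit of the application under test (AUT): a discount rule.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Automated checks written alongside the implementation, so defects
# surface during development rather than at the end of the cycle.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0    # normal case
    assert apply_discount(100.0, 0) == 100.0    # boundary: no discount
    assert apply_discount(100.0, 100) == 0.0    # boundary: full discount
    try:
        apply_discount(100.0, 150)              # invalid input must fail
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for invalid percent")

test_apply_discount()
print("all checks passed")
```

Because these checks are cheap to run, they can execute on every code change, which is exactly the feedback loop shift-left depends on.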


End-to-End Testing Applied over Shifting Left!

Software testing can be broadly classified into three categories: unit, integration, and end-to-end testing. Traditionally, not all testing shifts left from unit tests to system tests, but that is exactly what Shift Left Testing changes. Unit testing verifies the basic units of code, while end-to-end testing validates the final product from the customer’s or user’s perspective. If we bring end-to-end testing to the left, we gain better visibility of the code and its impact on the entire product during the development cycle itself.

The best ways to leverage ML (Machine Learning) and achieve a shift left of testing towards design and development include continuous testing, visual testing, API coverage, scalable and extendable test coverage, predictive analytics, and code-less automation.


First Time Right & Quality on Time

Shift Left Testing not only reduces delivery timelines, it also rules out last-minute defects: software flaws are identified during the development cycle and fixed there, which eventually results in “First Time Right”. The chance of leaking a defect is much lower, and the time spent by development and testing teams on fixing and retesting the software product is also reduced, thereby increasing productivity and supporting “Quality on Time”.

I would like to refer to a research finding by the Ponemon Institute: vulnerabilities detected early in the development process cost around $80 on average to fix, but the same vulnerabilities may cost around $7,600 to fix if detected after they have moved into production.


The shift-left approach emphasizes the need for developers to concentrate on quality from the early stages of a software build, rather than waiting for errors and bugs to be found late in the SDLC.

Machine Learning vs AI vs Shift Left Testing

There are opportunities to leverage ML methods to optimize the continuous integration of an application under test (AUT) almost from the start. Getting machine learning to work is a comparatively small feat; feeding it the right data and the right algorithm is the tough task. In our evolving AI world, gathering data from testing is straightforward, but making practical use of all this data within a reasonable time is what remains elusive. A specific instance is the ability to recognize patterns formed within test automation cycles. Why is this important? Patterns are present in the way design specifications change and in the methods programmers use to implement those specifications. Patterns also appear in the results of load testing, performance testing, and functional testing.

ML algorithms are great at pattern recognition. But to make pattern recognition possible, human developers must determine which features in the data might express valuable patterns. Collecting and wrangling the data into a solid form, and knowing which of the many ML algorithms to feed it into, is critical to success.
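As a deliberately simple statistical stand-in for such pattern recognition (the durations below are invented, and a real pipeline would apply proper ML models over many features, not a single z-score), the sketch below flags test runs whose duration deviates sharply from the pattern learned across CI cycles:

```python
import statistics

# Hypothetical durations (seconds) of the same automated test across CI runs.
durations = [12.1, 11.8, 12.4, 12.0, 11.9, 25.3, 12.2]

# Learn the "normal" pattern from the data.
mean = statistics.mean(durations)
stdev = statistics.stdev(durations)

# Flag runs whose duration deviates strongly from that pattern;
# such outliers often point at a performance regression or flaky test.
anomalies = [
    (i, d) for i, d in enumerate(durations)
    if abs(d - mean) / stdev > 2
]
print(anomalies)  # run 5, at 25.3 s, stands out
```

The point is the feature-selection step the paragraph describes: a human chose "run duration" as the feature worth modeling before any algorithm could find the pattern.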

Many organizations are striving to adopt shift left in their development process; testing and automation are no longer just QA activities. This indicates that the roles of dedicated developers and dedicated testers are fading away. Change is challenging, but there are a few things every team can do to make this shift effective: train developers to take responsibility for testing, add quality checks to code reviews, make testers aware of the code, use the same tools across roles, and always begin with testability in mind.

Shifting left gives a greater ability to automate testing. Test automation provides some critical benefits:

  • Fewer human errors
  • Improved test coverage (running multiple tests at the same time)
  • Room for QA engineers to innovate beyond day-to-day activities
  • Fewer or no production defects
  • A seamless product development and testing model

Introducing and practicing Shift Left Testing improves the efficiency, effectiveness, and coverage of the testing scope in a software product, which helps delivery and productivity.


About the Author –

Riyaz heads the QA Function for all the IP Projects in GAVS. He has vast experience in managing teams across different domains such as Telecom, Banking, Insurance, Retail, Enterprise, Healthcare etc.

Outside of his professional role, Riyaz enjoys playing cricket and is interested in traveling and exploring things. He is passionate about fitness and bodybuilding and is fascinated by technology.

Reimagining ITSM Metrics

Rama Vani Periasamy

In an IT organization, what is measured as success? Predominantly, it leans towards Key Performance Indicators, internally focused metrics, SLAs, and other numbers. Why don’t we shift our performance reporting towards the ‘value’ delivered to our customers, alongside the contractually agreed service levels? The success of any IT operation comes from defining what it can do to deliver value, and publishing the value that has been delivered is the best way to celebrate that success.

It has been a concern that people in service management treat value as trivial and often don’t deliver any real information about the work they do. In other words, the value they have created goes unreported, and the focus lies only on SLA-driven metrics and contractual obligations. It could be because they are more comfortable with the conventional way of demonstrating the SLA targets achieved. This eventually prevents a business partner from playing a more strategic role.

“Watermelon reporting” is a phrase used to describe a service provider’s performance reports. The SLA reports show that the service provider has adhered to the agreed service levels and met all contractual service level targets; it looks ‘green’ on the outside, just like a watermelon. However, the level of service perceived by the service consumer does not reflect the ‘green’ status reported (it might actually be ‘red’, like the inside of a watermelon). And the service provider continues to report on metrics that do not address the pain points.

This misses the whole point about understanding what success really means to a consumer. We tend to overlook valuable data, the data that shows how the organization, as a service provider, is delivering value and helping customers achieve their business goals.

The challenge here is that often consumers have underdeveloped, ambiguous and conflicting ideas about what they want and need. It is therefore imperative to discover the users’ unarticulated needs and translate them into requirements.

For a service provider, a meaningful way of reporting success would be focused on outcomes rather than outputs which is very much in tandem with ITIL4. Now this creates a demand for better reporting, analysis of delivery, performance, customer success and value created.

Consider a healthcare provider: the reduced time spent retrieving a patient’s history during surgery can be a key business metric, while the number of incidents created or the number of successful changes may be secondary. As a service provider, understanding how your services support such business metrics adds meaning to the service delivered and enables value co-creation.

It is vital that a strong communication avenue is established between the customer and the service provider teams to understand the context of the customer’s business. To a large extent, this helps the service provider teams prioritize what they do based on what is critical to the success of the customer or service consumer. More importantly, it enables the provider to become a true partner to their customers.

Take the service desk as an example: service desk engineers fix printers and laptops and reset passwords. These activities may not provide business value in themselves, but they mitigate loss or disruption to the service consumer’s business activities. The other principal part of service desk activity is responding to service requests. This is very much an area where the business value delivered to customers can be measured using ITSM.

Easier said than done; but how, and what, business value should be reported? Here are some examples that are good enough to get started.

1. Productivity
Every time a laptop problem is fixed within the SLA, the customer can get back to work and be productive. Value can be measured here as the cost reduction, considering the employee’s cost per hour and the time spent by the IT team to fix the laptop.

How long does it take for the service provider to provide what a new employee needs to be productive? This measure of how long it takes to get people set up with the required resources, and whether this lead time matches the level of agility the business requires, equates to business value.
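As a back-of-the-envelope sketch of the productivity measure described above (all figures here are hypothetical, chosen only to illustrate the calculation), the value delivered can be estimated as the productivity recovered minus the cost of delivering the fix:

```python
# Hypothetical figures for one organization; substitute your own.
employee_cost_per_hour = 50.0   # fully loaded cost of the affected employee
hours_saved_per_incident = 4.0  # downtime avoided by fixing within the SLA
fix_cost_per_incident = 60.0    # cost of the IT team's time spent on the fix
incidents_per_month = 30

# Business value = productivity recovered minus the cost of recovering it.
value_per_incident = (employee_cost_per_hour * hours_saved_per_incident
                      - fix_cost_per_incident)
monthly_value = value_per_incident * incidents_per_month
print(value_per_incident, monthly_value)  # 140.0 per incident, 4200.0 monthly
```

Numbers like these turn an SLA line item ("fixed within 4 hours") into a value statement ("recovered roughly $4,200 of productivity this month"), which is the kind of reporting the article argues for.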

2. Continual Service Improvement (CSI)

Measuring value becomes meaningless when there is no CSI. So, measuring the cost of fixing an incident plus the loss of productivity, and then identifying and providing solutions for reducing those costs or avoiding the incidents altogether, is where CSI comes into play.

Here are some key takeaways:

  • Make reporting meaningful by demonstrating the value delivered and co-created, uplifting your operations to a more strategic level.
  • Speak to your customers to capture their requirements in terms of value and enable value co-creation as partners.
  • Your report may wind up in the trash not because you have reported the wrong metrics, but because it reports data that is of little importance to your audience.

Reporting value may seem challenging, and it really is. But that’s not the real problem. Keep reporting your SLAs and metrics, but add more insight to them. Keep an eye on your outcomes and prevent your IT service operations from turning into a watermelon!


About the Author –

Rama is a part of the Quality Assurance group, passionate about ITSM. She loves reading and traveling.
To break the monotony of life and to share her interest in books and travel, she blogs and curates at www.kindleandkompass.com

Privacy Laws – Friends not Foes!

Barath Avinash

“Privacy means people know what they’re signing up for, in plain language, and repeatedly. I believe people are smart. Some people want to share more than other people do. Ask them.” – Steve Jobs

Cyber Security and Compliance Services

However trivial a piece of data may seem today, it might be of high importance tomorrow. Misuse of personal data can lead to devastating consequences for the data owner, and possibly for the data controller.

Why is Data Privacy important?

To understand the importance of data privacy, we must understand the consequences of not implementing privacy protection. A very relevant example is the Facebook-Cambridge Analytica scandal, in which the data of millions of Facebook users was used for election canvassing without their explicit consent.

One long-standing argument against privacy runs: “I do not have anything to hide, so I do not care about privacy.” It is true that privacy can provide secrecy, but beyond that, privacy also provides autonomy, and therefore freedom, which is more important than secrecy.

How can businesses benefit by being data privacy compliant?

Businesses gain multifold benefits from complying with, implementing, and enforcing privacy practices within the organization. Once an organization is compliant with general data privacy principles, it is also largely compliant with healthcare data protection laws, security regulations, and standards. This reduces the effort the organization must spend to comply with several other security and privacy regulations or standards.

How can businesses leverage privacy for competitive advantage?

Privacy has become one of the most sought-after domains since the enactment of the GDPR in the EU, followed by the CCPA in the USA and several other data protection laws around the world. Businesses can leverage these regulations for competitive advantage rather than viewing them as a hurdle or merely a mandatory compliance requirement. This can be achieved by proactively implementing and enforcing privacy practices within the organization: establish regulatory compliance with customers by asking for consent, being transparent about the data in use, and providing awareness. Educating people with user-centric awareness, rather than awareness for the sake of compliance, is good practice and will enhance the reputation of the business.

Why is privacy by design crucial?

Businesses should also apply the ‘privacy by design’ principle to their operations: building privacy in from the outset yields a product that is compliant with privacy regulations as well as security regulations and standards, resulting in a solidly built, future-proof product.

The work doesn’t stop with implementation and enforcement; continual practice is necessary to maintain consistency and establish ongoing trust with customers.

With statutory privacy regulations increasing in developed countries, several other countries are either planning to enact privacy laws or have already started implementing them. This is the right time for businesses in developing countries to start investing in privacy practice, so that compliance is effortless when a privacy law is enacted and enforced.

What’s wrong with Privacy Laws?

Privacy laws that are in practice come with their fair share of problems since they are relatively new.

  • Consent fatigue is a major issue with the GDPR, since it requires data owners to consent to the processing or use of their data repeatedly. This tires data owners and results in them ignoring privacy and consent notices sent by the data processor or data collector.
  • Another common issue is ill-motivated malicious users or automated bots flooding the data collector with requests for a data owner’s data, a loophole under the GDPR’s ‘right to access’ that is being exploited in some cases. This burdens the data protection officer, delays responses to legitimate requests, and thus invites legal consequences.
  • Misuse of purpose-limitation guidelines is also a major problem in the GDPR space: time and again, data collectors give data owners a processing-purpose notice and subsequently use the same data for a different purpose without obtaining fresh consent, thus violating the law.

What does the future hold for privacy?

As new privacy laws are in the works, better and more comprehensive laws will be introduced, learning from the shortcomings of existing ones. Amendments to existing laws will also follow, enhancing the privacy culture.

The privacy landscape is moving towards better and more responsible use of user data. As the concept of privacy and its implementation mature with time, it is high time businesses started implementing privacy strategies primarily for business growth rather than merely for regulatory compliance. That is the goal every mature organization should aim for and work towards.

Privacy is first and foremost a human right; privacy laws are therefore enacted on the basis of rights, because laws can be challenged and modified in a court of justice, but rights cannot be.

References:

https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.htm

https://iapp.org/news/a/fake-dsars-theyre-a-thing/

About the Author –

Barath Avinash is part of GAVS’ security practice risk management team. He has a master’s degree in cyber forensics and information security. He is an information security and privacy enthusiast, and his skill set includes governance, compliance, and cyber risk management.

Blockchain-based Platform for COVID-19 Vaccine Traceability

Srinivasan Sundararajan

Over the last few weeks, several pharma companies across the world have announced vaccines for COVID-19. The respective governments are going through rigorous testing and approval processes to roll out the vaccines soon.

The massive exercise of administering vaccines to billions of people across different geographies poses various challenges. Add to this the fact that different vaccines have strict conditions for storage and handling. Also, the entire history of traceability of the vaccine should be available.

While tracking the supply chain of any commodity in general, and pharmaceutical products in particular, is always complex, the COVID-19 vaccine poses tougher challenges still. The following are the current temperature-sensitivity needs of various vaccine manufacturers.

[Table: temperature-sensitivity requirements of various vaccine manufacturers]

The information is from publicly available sites and should not be treated as a guideline for vaccine storage.

Blockchain to the Rescue

Even before the pandemic, Blockchain, with its built-in ability to provide transparency across stakeholders, has been a major platform for pharmaceutical traceability. The criticality of COVID-19 vaccine traceability has only strengthened the case for using blockchain in the pharma supply chain.

Blockchain networks, with base attributes like decentralized ownership of data, a single version of truth across stakeholders, cryptography-based security that ensures data ownership, and the ability to implement and manage business rules, are a natural default platform for handling the traceability of COVID-19 vaccines across multiple stakeholders.

Going beyond, Blockchain will also play a major role in the Identity and Credentialing of healthcare professionals involved, as well as the Consent Management of the patients who will be administered the vaccine. With futuristic technology needs like Health Passport, Digital Twin of a Person, Blockchain goes a long way in solving the current challenges in healthcare beyond streamlining the supply chain.

GAVS Blockchain-Based Prototype for COVID-19 Vaccine Traceability

GAVS has created a prototype of a Blockchain-based network platform for vaccine traceability to demonstrate its usability. The solution has a much larger scope, extending to various healthcare use cases.

Below is the high-level process flow of the COVID-19 vaccine trial and the various stakeholders involved.

[Figure: high-level process flow of the COVID-19 vaccine trial]

Image Source – www.counterpointresearch.com

Based on the use case and the stakeholders involved, the GAVS prototype first creates a consortium using a private blockchain network. For the sake of simplicity, distributors are not shown, but in real life every stakeholder would be present. Individuals who receive the vaccine from hospitals are not part of the network at this stage, but in future their consent can also be tracked using Blockchain.

Using Azure Blockchain Service, we can create private consortium blockchain networks, where each network is limited to specific participants. Only participants in the private consortium blockchain network can view and interact with the blockchain. This ensures that sensitive information about vaccines is not exposed or misused.


The following smart contracts are created as part of the solution, with ownership assigned to the individual stakeholders.

[Figure: smart contracts and their assigned owners]

A glimpse of a few of the smart contracts is listed below for illustration.

pragma solidity ^0.5.3;
pragma experimental ABIEncoderV2;

contract Batch {
    string  public BatchId;
    string  public ProductName;
    string  public ProductType;
    string  public TemperatureMaintained;
    string  public Efficacy;
    string  public Cost;
    address public CurrentOwner;
    address public ManufacturerAddr;
    address public AirLogAddr;
    address public LandLogAddr;
    address public HospAdminAddr;
    address public HospStaffAddr;
    string[] public AirTemp = new string[](10);
    string[] public LandTemp = new string[](10);
    string[] public HospTemp = new string[](20);
    string  public receiptNoteaddr;

    constructor (string memory _batchId, string memory _productName, string memory _productType, string memory _TemperatureMaintained, string memory _Efficacy, string memory _Cost) public {
        ManufacturerAddr = msg.sender;
        CurrentOwner = msg.sender; // the manufacturer owns the batch initially
        BatchId = _batchId;
        ProductName = _productName;
        ProductType = _productType;
        TemperatureMaintained = _TemperatureMaintained;
        Efficacy = _Efficacy;
        Cost = _Cost;
    }

    // Restricts an action to the current owner of the batch
    modifier onlyOwner() {
        require(msg.sender == CurrentOwner, "Only Current Owner Can Initiate This Action");
        _;
    }

    // Transfers ownership of the batch as it moves along the supply chain
    function updateOwner(address _addr) public onlyOwner {
        CurrentOwner = _addr;
    }

    function retrieveBatchDetails() public view returns (string memory, string memory, string memory, string memory, string memory, address, address, address, address, address, string[] memory, string[] memory, string[] memory, string memory) {
        return (BatchId, ProductName, TemperatureMaintained, Efficacy, Cost, ManufacturerAddr, AirLogAddr, LandLogAddr, HospAdminAddr, HospStaffAddr, AirTemp, LandTemp, HospTemp, receiptNoteaddr);
    }
}
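The onlyOwner guard is what lets custody of a batch pass hand to hand along the supply chain. As a minimal sketch of that ownership rule outside Solidity (a hypothetical Python analogue for illustration, not part of the GAVS prototype):

```python
class BatchRecord:
    """Toy analogue of the smart contract's ownership check."""

    def __init__(self, batch_id: str, manufacturer: str):
        self.batch_id = batch_id
        # Mirrors the constructor: the manufacturer is the initial owner.
        self.current_owner = manufacturer

    def update_owner(self, caller: str, new_owner: str) -> None:
        # Mirrors: require(msg.sender == CurrentOwner, ...)
        if caller != self.current_owner:
            raise PermissionError("Only Current Owner Can Initiate This Action")
        self.current_owner = new_owner

batch = BatchRecord("B001", "manufacturer")
batch.update_owner("manufacturer", "air_logistics")  # allowed: caller owns the batch
# batch.update_owner("hospital", "x") would raise PermissionError
```

On the real network, the caller identity is the cryptographically verified sender address, so the check cannot be spoofed the way a plain string could be.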

The front end (DApp) through which the traceability of the COVID-19 vaccine can be monitored has also been developed; the following screenshots show some important data flows.

Vaccine Traceability System Login Screen


Traceability view for a particular batch of Vaccine


Details of vaccinated patients entered by hospital


Advantages of The Solution

  • With every vaccine monitored over the blockchain, each link along the chain could keep track of the entire process, and health departments could monitor the chain as a whole and intervene, if required, to ensure proper functioning.
  • Manufacturers could track whether shipments are delivered on time to their destinations.
  • Hospitals and clinics could better manage their stocks, mitigating supply and demand constraints. Furthermore, they would get guarantees concerning vaccine authenticity and proper storage conditions.
  • Individuals would have an identical guarantee for the specific vaccine they receive.
  • Overall, this technology-driven approach will help save lives at this critical juncture.

Extensibility to Future Needs

Gartner’s latest Hype Cycle for emerging technologies highlights several new technologies, notably the Health Passport. Just as travelers carry a physical passport, the pandemic has created the need for a health passport: a digital health record that passengers can carry on their phones. Ideally, it should show a passenger’s past exposure to diseases and their vaccination records. By properly deploying health passports, several industries can revive themselves by allowing the free-flowing movement of passengers across the globe.

The above blockchain solution, though meant for COVID-19 traceability, can potentially be extended into a health passport once patients also become part of the network through a wallet-based authentication mechanism. At GAVS, we plan to explore health passports on Blockchain in the coming months.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Healthcare Data Management Solutions for the post-pandemic Healthcare era, using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at Patient data sharing within Hospitals as well as across Hospitals (Healthcare Interoperability) while bringing more trust and transparency into the healthcare process using patient consent management, credentialing, and zero knowledge proofs.

Tuning Agile Delivery for Customer and Employee Success

Ashish Joseph

What is Agile?

Agile has been very popular in the software development industry for making delivery more efficient and effective. A common misconception is that Agile is a framework or a process that follows a methodology for software development. In fact, Agile is a set of values and principles: a collection of beliefs that teams can use for decision-making and for optimizing project delivery. It is customer-centric and flexible, helping teams adapt accordingly. It doesn’t make decisions for the team; instead, it gives teams a foundation for making decisions that result in a stellar execution of the project.

According to the Agile Manifesto, teams can deliver better by prioritizing the items on the left over those on the right:

  • Individuals and Interactions over Processes and Tools
  • Working Software over Comprehensive Documentation
  • Customer Collaboration over Contract Negotiation
  • Responding to Change over Following a Plan

With respect to software development, Agile is an iterative approach to project management that helps teams deliver results with measurable customer value. The approach is designed to be fast and to ensure quality of delivery, aided by periodic customer feedback. Agile breaks the requirement down into smaller portions whose results can be continuously evaluated, with a natural mechanism for responding to change quickly.


Why Agile?

The world is changing, and businesses must be ready to adapt as market demands change over time. Of the Fortune 500 companies from 1955, 88% have since perished. Nearly half of the S&P 500 is forecast to be replaced every ten years. The only way for organizations to survive is to innovate continuously and understand the pulse of the market every step of the way. An innovative mindset helps organizations react to changes and discover the new opportunities the market offers from time to time.

Agile helps organizations execute projects in an ever-changing environment. The approach breaks work into modules for continuous customer evaluation, so that changes can be implemented swiftly.

The traditional approach to software project management uses the waterfall model: Plan, Build, Test, Review, and Deploy. This approach results in iterations back to the plan phase whenever requirements deviate from the market. When teams choose Agile, they can respond to changes in the marketplace and implement customer feedback without going off plan, because Agile plans are designed to accommodate continuous feedback and the changes arising from it. Organizations should build the ability to adapt and respond fast to new and changing market demands; this foundation is imperative for modern software development and delivery.

Is Agile the right fit for my customer? Advocates of Agile development claim that Agile projects succeed more often than waterfall delivery models, but this claim is not well supported by statistics. A paper titled “How Agile your Project should be?” by Dr. Kevin Thompson of Kevin Thompson Consulting provides a mathematical perspective on both Agile and waterfall project management. In it, both approaches are applied to the same requirements and affected by the same unanticipated variables. The paper focuses on statistical evidence for evaluating which of the two options fits.

While assessing the right approach, the following questions need to be asked:

  • Are the customer requirements for the project complete, clear and stable?
  • Can the project effort estimation be easily predicted?
  • Has a project with similar requirements been executed before?

If the answer to all the above questions is yes, then Agile is not the approach to follow.

The Agile approach provides a better return on investment and greater risk reduction when there is high uncertainty in the project’s variables. When uncertainty is low, waterfall projects tend to be more cost-effective than Agile projects.
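As a toy expected-cost model (illustrative numbers only, not Dr. Thompson’s analysis), the trade-off can be expressed as a base delivery cost plus probability-weighted rework:

```python
# Toy model: expected cost under requirement uncertainty (all figures assumed).
def expected_cost(base_cost: float, change_prob: float, rework_cost: float) -> float:
    """Expected project cost = base cost + probability-weighted rework cost."""
    return base_cost + change_prob * rework_cost

# Assumptions: waterfall reworks a whole phase when requirements change,
# while Agile carries a small overhead but absorbs change cheaply per sprint.
for p in (0.25, 0.75):  # low vs high requirement uncertainty
    waterfall = expected_cost(100, p, 80)
    agile = expected_cost(115, p, 10)
    print(p, waterfall, agile)
```

With these numbers, waterfall is cheaper at low uncertainty (120 vs 117.5 flips only at higher change probability, where 160 vs 122.5 favors Agile), which is the intuition the paragraph above describes.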

Optimizing Agile Customer Centricity

Customer centricity should be the foundation of all project deliveries. It helps businesses align themselves with the customer’s mission and vision for the project at hand. When taking an Agile approach in a dynamic, changing environment, the following principles can help organizations align better with their customers’ goals.

  • Prioritizing Customer Satisfaction through timely and continuous delivery of requirements.
  • Openness to changing requirements, regardless of the development phase, to enable customers to harness the change for their competitive advantage in the market.
  • Frequent delivery of modules with a preference towards shorter timelines.
  • Continuous collaboration between management and developers to understand the functional and non-functional requirements better.
  • Measuring progress through the number of working modules delivered.
  • Improving velocity and agility in delivery by concentrating on technical excellence and good design.
  • Periodic retrospection at the end of each sprint to improve delivery effectiveness and efficiency.
  • Trusting and supporting motivated individuals to lead projects on their own and allowing them to experiment.

Since Agile is a collection of principles and values, its real utility lies in giving teams a common foundation to make good decisions with actionable intelligence to deliver measurable value to their customers.

Agile Empowered Employee Success

A truly Agile team makes their decisions based on Agile values and principles. The values and principles have enough flexibility to allow teams to develop software in the ways that work best for their market situation while providing enough direction to help them to continually move towards their full potential. The team and employee empowerment through these values and principles aid in the overall performance.

Agile improves not only the team but also the environment around it, helping employees stay compliant with audit and governance requirements. It reduces the overall project cost for dynamic requirements and focuses on technical excellence along with an optimized delivery process. The 14th Annual State of Agile Report 2020, published by StateofAgile.com, surveyed 40,000 Agile executives for insights into the application of Agile across different areas of enterprises, including which Agile techniques contributed most to employee success. The following are some of the most preferred Agile techniques for enhancing employee and team performance.

[Figure: most preferred Agile techniques for employee and team success]

All the above Agile techniques help teams and individuals introspect their actions and understand areas of improvement in real time, with periodic qualitative and quantitative feedback. Each deliverable from multiple cross-functional teams can be monitored, tracked, and assessed under a single roof. Collectively, these techniques enhance delivery and empower each team to realize its full potential.

Above all, Agile techniques help teams feel the pulse of the customer every step of the way. Openness to change, regardless of the phase, helps them map all the requirements, leading to overall customer satisfaction coupled with employee success.

Top 5 Agile Approaches

[Figure: top 5 Agile approaches]

A Truly Agile Organization

The majority of Agile adoption has been concentrated in development, IT, and operations. However, organizations should strive for effective alignment and coordination across all departments, and many today aim to expand agility into areas beyond building, deploying, and maintaining software. At the end of the day, Agile is not about the framework; it is about the Agile values and principles an organization believes in for achieving its mission and vision in the long run.

About the Author –

Ashish Joseph is a Lead Consultant at GAVS working for a healthcare client in the Product Management space. His areas of expertise lie in branding and outbound product management. He runs a series called #BizPective on LinkedIn and Instagram focusing on contemporary business trends from a different perspective. Outside work, he is very passionate about basketball, music, and food.

Patient 360 & Journey Mapping using Graph Technology

Srinivasan Sundararajan

360 Degree View of Patient

With rising demands for quality and cost-effective patient care, healthcare providers are focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. In other words, data-driven healthcare is augmenting human intelligence.

360 Degree View of Patient, as it is called, plays a major role in delivering the required information to the providers. It is a unified view of all the available information about a patient. It could include but is not limited to the following information:

  • Appointments made by the patients
  • Interaction with different doctors
  • Medications prescribed by the doctors
  • Patient’s relationships to other patients within the ecosystem, especially to identify family-history-related risks
  • Patient’s admission to hospitals or other healthcare facilities
  • Discharge and ongoing care
  • Patient personal wellness activities
  • Patient billing and insurance information
  • Linkages to the same patient in multiple disparate databases within the same hospital
  • Information about a patient’s involvement in various seminars, medical-related conferences, and other events

Limitations of Current Methods

As is evident in most hospitals, this information is usually scattered across multiple data sources/databases. Hospitals typically create a data warehouse by consolidating information from multiple sources into a unified database. However, this is done using relational databases, which rely on joining tables across entities to arrive at a complete picture. An RDBMS is not built to handle relationships that extend across multiple hops and require drilling down many levels.

Role of Graph Technology & Graph Databases

A graph database is a collection of nodes (or entities typically) and edges (or relationships). A node represents an entity (for example, a person or an organization) and an edge represents a relationship between the two nodes that it connects (for example, friends). Both nodes and edges may have properties associated with them.

While there are multiple graph databases in the market today, like Neo4j, JanusGraph, and TigerGraph, the following technical discussion pertains to the graph database that is part of SQL Server 2019. The main advantage of this approach is that it utilizes the best RDBMS features wherever applicable, while keeping the graph options for complex relationships like the 360 degree view of patients, making it a true polyglot persistence architecture.

As mentioned above, in SQL Server 2019 a graph database is a collection of node tables and edge tables. A node table represents an entity in a graph schema; an edge table represents a relationship. Edges are always directed and connect two nodes, and an edge table enables users to model many-to-many relationships in the graph. Normal SQL INSERT statements are used to create records in both node and edge tables.

While node tables and edge tables represent the storage of graph data, specialized commands that extend SQL help traverse between the nodes to retrieve full details, such as the patient 360 degree data.

MATCH statement

The MATCH clause links two node tables through an edge table, so that complex relationships can be retrieved. An example:

[Figure: example MATCH query]

SHORTEST_PATH statement

It finds the relationship path between two nodes by performing multiple hops recursively. It is one of the most useful constructs for assembling the 360 degree view of a patient.
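Outside the database engine, the idea behind SHORTEST_PATH can be sketched with a breadth-first search over a small, entirely hypothetical patient graph (an illustration of the concept, not SQL Server’s implementation):

```python
from collections import deque

# Hypothetical graph: nodes are patients/doctors/facilities, directed
# edges are relationships such as "visits" or "works at".
edges = {
    "PatientA": ["DrSmith", "PatientB"],
    "DrSmith": ["CityHospital"],
    "PatientB": ["DrJones"],
    "DrJones": ["CityHospital"],
    "CityHospital": [],
}

def shortest_path(start: str, goal: str):
    """BFS over the adjacency list: analogous in spirit to SHORTEST_PATH."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no relationship path exists

print(shortest_path("PatientA", "CityHospital"))
# ['PatientA', 'DrSmith', 'CityHospital']
```

The database does the same traversal declaratively, over node and edge tables, without the application having to materialize the whole graph.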

There are more options and statements for graph processing; together, they help identify and retrieve complex relationships across business entities.

Graph Processing in Rhodium

As mentioned in my earlier articles (Healthcare Data Sharing & Zero Knowledge Proofs in Healthcare Data Sharing), the GAVS Rhodium framework enables patient and data management and patient data sharing, and graph databases play a major part in providing the patient 360 view as well as provider (doctor) credentialing data. The below screenshots show samples from the reference implementation.

[Screenshots: samples from the Rhodium reference implementation]

Patient Journey Mapping

Typically, a patient’s interaction with the healthcare service provider goes through a cycle of events. The goal of the provider organization is to make this journey smooth and provide the best care to the patients. It should be noted that not all patients go through this journey sequentially; some may start at a particular point and skip some intermediate points. Proper collection of the events behind patient journey mapping will also help predict future events, which will ultimately improve patient care.

Patient 360 data collection plays a major role in building the patient journey map. While there could be multiple definitions, the following is one example of a mapping between patient 360-degree events and the patient journey.

[Figure: mapping between patient 360-degree events and patient journey stages]

The below diagram shows an example of patient journey mapping information.

[Figure: example patient journey map]

Understanding patients better is essential for improving patient outcomes. The 360 degree view of patients and patient journey mapping are key components for providing such insights. While traditional technologies fall short of providing those links, graph databases and graph processing will play a major role in patient data management.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, especially in the mainstream media and also among our clients, is that to conduct a successful attack against an IT system it is necessary to ‘investigate’ and find a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risk is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST have found that correcting security bugs early on is orders of magnitude cheaper than doing so after development has been completed.

When composing a text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, security tools known as ASTs (Application Security Testing) find programming errors that introduce security weaknesses. ASTs report the file and line where the vulnerability is located, in the same way that a text editor reports the page and line containing a typo.

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just as it is almost impossible to catch every error in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools also known as ASTs, or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

The SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming that could lead to a vulnerability.

The DAST dynamic analysis replicates the view of an attacker. At this point, the tool executes hundreds or thousands of queries against the application designed to replicate the activity of an attacker to find security vulnerabilities. This is a black-box analysis because the point of view is purely external, with no knowledge of the application’s internal architecture.

The level of detail provided by the two types of tools differs. SAST tools report the file and line where the vulnerability is located, but no URL, while DAST tools report the external URL, but no details on where the problem lives within the application’s code base. Some teams use both tools to improve visibility, but this requires long and complex triaging to manage the resulting vulnerabilities.

The Interactive AST Approach

Interactive Application Security Testing (IAST) tools combine the static and the dynamic approaches. They have access to the internal structure of the application, and to the way it behaves with actual traffic. This privileged point of view is ideal for conducting security analysis.

From an architecture point of view, the IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms such as Application Performance Monitoring tools (APMs) share this proven approach.

Once the agent has been installed, it incorporates automatic security sensors at the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage provides much better visibility than either SAST or DAST alone.
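The sensor idea can be illustrated with a toy example: a wrapper around a database call that reports when data tagged as coming from an HTTP request reaches the SQL sink. All names here (`Tainted`, `sql_sensor`, `run_query`) are invented for this sketch; a production IAST agent instruments the runtime itself and propagates taint automatically through string operations, rather than wrapping one function.

```python
import functools

class Tainted(str):
    """Marks data that originated in an HTTP request (simplified taint tag)."""

alerts = []  # findings recorded by the sensor

def sql_sensor(func):
    """Wrap a database call so tainted queries are reported before running."""
    @functools.wraps(func)
    def wrapper(query):
        if isinstance(query, Tainted):
            alerts.append(f"tainted data reached SQL sink: {query!r}")
        return func(query)
    return wrapper

@sql_sensor
def run_query(query):
    # Stand-in for a real database driver call.
    return f"executed: {query}"

user_id = Tainted("1 OR 1=1")  # value taken straight from the request
# Real agents track taint through concatenation; here we re-tag the
# assembled query by hand to keep the sketch short.
run_query(Tainted("SELECT * FROM users WHERE id = " + user_id))
run_query("SELECT COUNT(*) FROM users")  # constant query: no alert
```

Because the sensor sees both the runtime value and the code location of the sink, it can report the static detail of SAST together with the request context of DAST.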

In terms of specific results, we can look at two important metrics: how many of the existing vulnerabilities a tool finds, and how many of the vulnerabilities it reports are false positives. The best DAST is able to find only 18% of the existing vulnerabilities in a test application. Worse still, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not real problems!
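To make the two metrics concrete, they can be written as simple ratios. The helper names and the example counts below are illustrative only; the 18% and 50% figures come from the benchmark results cited above.

```python
def detection_rate(found_real, total_real):
    """Share of the existing vulnerabilities that the tool actually finds."""
    return found_real / total_real

def false_positive_rate(false_reports, total_reports):
    """Share of the tool's reports that are not real problems."""
    return false_reports / total_reports
```

For example, a tool that finds 18 of 100 real flaws has a detection rate of 0.18, and a tool whose 40 reports include 20 incorrect ones has a false positive rate of 0.5.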


[Figure: comparison of AST detection results. Source: Hdiv Security via OWASP Benchmark public result data]

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments; development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic points of view allows it to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of the secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces manual verification activities.

An IAST tool can add a great deal of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.