The DNA of a Good Leader (PART I)

Rajeswari S

In our lives, we have all come across people with great leadership qualities. They may not be leading a team or an organization, but they exude an aura; they conduct themselves in a manner that sets them apart from the rest. While the debate rages on about whether leaders are born, made, discovered, or invented, let's see what makes a person a true and admirable leader.

Generally, a good leader is successful, progressive, and positive, and possesses strong personality traits, communication and delegation skills, charisma, agility, and adaptability, along with the ability to transform the atmosphere around them by effecting positive change.

Some people are able to bring out the best in others, and that is the edge they have. So, let's look beyond the obvious and list out the qualities that make a person, or YOU, a quintessential leader.

  1. Be passionate: Obviously, you would think it is dedication and commitment to one's work, to grow client numbers, revenue figures, etc. However, it is not just about that. Your passion affects not only your own attitude and energy but also that of those around you. It should spread like wildfire and inspire action and positive change among others.

  2. Face obstacles with grace: If any leader knew exactly what a customer or market truly wanted from the business, they would be hailed as no less than a God! But alas, life is always full of obstacles, and a true leader knows which battles to fight and how. Effective leaders approach roadblocks with a high level of positivity and maturity. They adopt creative problem-solving techniques that allow them to overcome situations that others might give up on.
  3. Allow honest mistakes, spot talent: An over-protected child learns nothing and cannot sail against the tides. A good leader allows their people to just GO FOR IT! Failure often provides us with some of life's biggest learning opportunities, and uncertainty and risk are inherent to running a team or business. Some people do commendable jobs in high-pressure situations. A good leader spots such people in their team and makes the best use of their qualities.
  4. Be street smart: It's hard to find a substitute for old-fashioned street smarts. Knowing how to trust your gut, quickly analyzing situations as well as the people you're dealing with, and knowing how to spot a bad deal or scammer are important aspects of leadership. Maturity and experience complement each other, and a perfect combination of the two makes a great leader.
  5. Be intuitive and take ownership: Intuition is to art as logic is to math. Leadership is often about following your gut instinct. It can be difficult to let go of logic in some situations, but learn to trust yourself. Having said that, if your instinct fails, leadership is also about taking ownership of what happened, learning lessons from it, and NEVER REPEATING THE SAME MISTAKE.
  6. Understand opportunity cost: Leaders know that many situations and decisions in business involve risk and there is an opportunity cost associated with every decision you make. An opportunity cost is the cost of a missed opportunity. This is usually defined in terms of money, but it may also be considered in terms of time, man-hours, or any other finite resource. Great leaders understand the consequences of their decisions before making them.
  7. Be liked: You can respect a person who talks flamboyantly, has a brilliant mind, impeccable manners, and business skills, but do you LIKE them? A leader should not only be respected but also be liked. Liking a person is not a quantifiable quality, is it? But it can be achieved in the way a leader captains the team, spreads a positive feeling among them, and makes the group feel that they belong there.
  8. Laugh: Yes, you read it right. One proven route to a person's mind or heart is a healthy sense of humor. It works well in getting the best out of your team. Nobody likes a templated talk or expression, even if it is good news you are trying to convey. Also, effective leaders can laugh at themselves as they understand that they are also human and can make mistakes like everyone else. Leaders who take themselves too seriously risk alienating people.

Unique brands of Leadership

A quick look at some successful CEOs, new-age entrepreneurs, and their unique leadership mantras:

  1. Satya Nadella, CEO, Microsoft

Leadership mantra: 

  • An avid reader
  • Looks beyond the horizon
  • Makes the right move at the right time
  • Makes every second count
  • Nurtures a strong company culture
  2. Nitin Saluja and Raghav Verma, Founders, Chaayos, India's fastest-growing tea startup

Leadership mantra: Give people wings to fly and they will carve out their own journey.

  3. Mukesh Ambani, Chairman & Managing Director, Reliance Industries Ltd

Leadership mantra:

  • Money is not everything but important
  • Have a dream and plan to fulfill it
  • Let your work speak for itself  
  • Trust your instincts
  • Trust all, but depend on none


About the Author –

Rajeswari works in IP, in Content Development, with a 13-year background in technical, content, and creative writing. Off work, she is passionate about singing, music, and creative writing; she loves highway drives and is a movie buff.

Patient 360 & Journey Mapping using Graph Technology

Srinivasan Sundararajan

360 Degree View of Patient

With rising demands for quality and cost-effective patient care, healthcare providers are focusing on data-driven diagnostics while continuing to utilize their hard-earned human intelligence. In other words, data-driven healthcare is augmenting human intelligence.

360 Degree View of Patient, as it is called, plays a major role in delivering the required information to the providers. It is a unified view of all the available information about a patient. It could include but is not limited to the following information:

  • Appointments made by the patients
  • Interaction with different doctors
  • Medications prescribed by the doctors
  • Patient's relationship to other patients within the ecosystem, especially to identify family-history-related risks
  • Patient’s admission to hospitals or other healthcare facilities
  • Discharge and ongoing care
  • Patient personal wellness activities
  • Patient billing and insurance information
  • Linkages to the same patient in multiple disparate databases within the same hospital
  • Information about a patient’s involvement in various seminars, medical-related conferences, and other events

Limitations of Current Methods

As evident in most hospitals, this information is usually scattered across multiple data sources/databases. Hospitals typically create a data warehouse by consolidating information from multiple sources into a unified database. However, this approach relies on relational databases, which join tables across entities to arrive at a complete picture. An RDBMS is not designed to handle relationships that extend across multiple hops and require drilling down many levels.

Role of Graph Technology & Graph Databases

A graph database is a collection of nodes (or entities typically) and edges (or relationships). A node represents an entity (for example, a person or an organization) and an edge represents a relationship between the two nodes that it connects (for example, friends). Both nodes and edges may have properties associated with them.

While there are multiple graph databases in the market today, such as Neo4j, JanusGraph, and TigerGraph, the following technical discussion pertains to the graph database that is part of SQL Server 2019. The main advantage of this approach is that it lets you utilize the best RDBMS features wherever applicable, while keeping the graph database options for complex relationships like the 360-degree view of patients, making it a true polyglot persistence architecture.

As mentioned above, in SQL Server 2019 a graph database is a collection of node tables and edge tables. A node table represents an entity in a graph schema. An edge table represents a relationship in a graph. Edges are always directed and connect two nodes. An edge table enables users to model many-to-many relationships in the graph. Normal SQL Insert statements are used to create records into both node and edge tables.
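A minimal sketch of what this looks like in SQL Server 2019, assuming hypothetical Patient and Doctor node tables connected by a consults edge table (all names are illustrative):

```sql
-- Node tables: one row per entity
CREATE TABLE Patient (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;
CREATE TABLE Doctor  (ID INTEGER PRIMARY KEY, name VARCHAR(100)) AS NODE;

-- Edge table: directed Patient -> Doctor relationship
CREATE TABLE consults AS EDGE;

-- Normal INSERT statements populate both kinds of tables
INSERT INTO Patient (ID, name) VALUES (1, 'John Doe');
INSERT INTO Doctor  (ID, name) VALUES (1, 'Dr. Smith');
INSERT INTO consults ($from_id, $to_id)
VALUES ((SELECT $node_id FROM Patient WHERE ID = 1),
        (SELECT $node_id FROM Doctor  WHERE ID = 1));
```

The pseudo-columns $node_id, $from_id, and $to_id are maintained by SQL Server itself; the edge row simply links the two node rows.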

While the node tables and edge tables represent the storage of graph data there are some specialized commands which act as extension of SQL and help with traversing between the nodes to get the full details like patient 360 degree data.

MATCH statement

The MATCH statement links two node tables through an edge table, so that complex relationships can be retrieved.
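A sketch, assuming hypothetical Patient and Doctor node tables connected by a consults edge table: which doctors has a given patient consulted?

```sql
SELECT p.name AS patient, d.name AS doctor
FROM Patient AS p, consults AS c, Doctor AS d
WHERE MATCH(p-(c)->d)
  AND p.name = 'John Doe';
```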


SHORTEST_PATH statement

It finds the relationship path between two nodes by performing multiple hops recursively. It is one of the most useful statements for building the 360-degree view of a patient.
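A sketch of the syntax, assuming a hypothetical Patient node table and a relatedTo edge table; the `+` quantifier asks for an arbitrary number of hops:

```sql
SELECT p1.name AS patient,
       STRING_AGG(p2.name, '->') WITHIN GROUP (GRAPH PATH) AS path
FROM Patient AS p1,
     relatedTo FOR PATH AS r,
     Patient FOR PATH AS p2
WHERE MATCH(SHORTEST_PATH(p1(-(r)->p2)+))
  AND p1.name = 'John Doe';
```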

There are more options and statements available as part of graph processing. Together, they help identify and retrieve complex relationships across business entities.

Graph Processing in Rhodium

As mentioned in my earlier articles (Healthcare Data Sharing & Zero Knowledge Proofs in Healthcare Data Sharing), the GAVS Rhodium framework enables Patient Data Management and Patient Data Sharing, and graph databases play a major part in providing the patient 360 as well as provider (doctor) credentialing data. The below screenshots show samples from the reference implementation.


Patient Journey Mapping

Typically, a patient's interaction with the healthcare service provider goes through a cycle of events. The goal of the provider organization is to make this journey smooth and provide the best care to the patients. It should be noted that not all patients go through this journey in a sequential manner; some may start the journey at a particular point and may skip some intermediate journey points. Proper data collection of the events behind patient journey mapping will also help with the future prediction of events, which will ultimately help with patient care.

Patient 360 data collection plays a major role in building the patient journey mapping. While there could be multiple definitions, the following is one example of mapping between patient 360-degree events and the patient journey.


The below diagram shows an example of patient journey mapping information.


Understanding patients better is essential for improving patient outcomes. The 360-degree view of patients and patient journey mapping are key components for providing such insights. While traditional technologies fall short of providing those links, graph databases and graph processing will play a major role in patient data management.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.

IAST: A New Approach to Finding Security Vulnerabilities

Roberto Velasco
CEO, Hdiv Security

One of the most prevalent misconceptions about cybersecurity, especially in the mainstream media and also among our clients, is that to conduct a successful attack against an IT system it is necessary to ‘investigate’ and find a new defect in the target’s system.

However, for most security incidents involving internet applications, it is enough to simply exploit existing and known programming errors.

For instance, the dramatic Equifax breach could have been prevented by following basic software security best-practices, such as patching the system to prevent known vulnerabilities. That was, in fact, one of the main takeaways from the forensic investigation led by the US federal government.

One of the most important ways to reduce security risks is to ensure that all known programming errors are corrected before the system is exposed to internet traffic. Research bodies such as the US NIST have found that correcting security bugs early on is orders of magnitude cheaper than doing so after development has been completed.

When composing a text in a text editor, the spelling and grammar checker highlights the mistakes in the text. Similarly, there are security tools known as ASTs (Application Security Testing) that find programming errors which introduce security weaknesses. ASTs report the file and line where the vulnerability is located, in the same way that a text editor reports the page and line that contain a typo.

In other words, these tools allow developers to build software that is largely free of security-related programming errors, resulting in more secure applications.

Just as it is almost impossible to catch all errors in a long piece of text, most software contains many serious security vulnerabilities. The fact that some teams do not use any automated help at all makes these security weaknesses all the more prevalent and easy to exploit.

Let’s take a look at the different types of security issue detection tools also known as ASTs, or vulnerability assessment tools, available in the market.

The Traditional Approach

Two mature technologies capture most of the market: static code analysis (SAST) and web scanners (dynamic analysis or DAST). Each of these two families of tools is focused on a different execution environment.

The SAST static analysis, also known as white-box analysis because the tool has access to the source code of the application, scans the source code looking for known patterns that indicate insecure programming that could lead to a vulnerability.
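As an illustration of the kind of pattern such a tool flags (a generic Python sketch, not tied to any particular SAST product): SQL built by string concatenation is a classic injection weakness a static analyzer pattern-matches, while the parameterized version is the fix its report would point to.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

def find_user_unsafe(name):
    # Insecure: attacker-controlled input concatenated into SQL.
    # A SAST tool would report this file and line.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Secure: parameterized query; input never becomes SQL syntax.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # injection: returns every row
print(find_user_safe(payload))    # returns no rows
```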

The DAST dynamic analysis replicates the view of an attacker. The tool executes hundreds or thousands of queries against the application, designed to replicate the activity of an attacker, to find security vulnerabilities. This is a black-box analysis because the point of view is purely external, with no knowledge of the application's internal architecture.

The level of detail provided by the two types of tools is different. SAST tools provide the file and line where the vulnerability is located, but no URL, while DAST tools provide the external URL, but no details on the location of the problem within the code base of the application. Some teams use both tools to improve visibility, but this requires long and complex triaging to manage the vulnerabilities.

The Interactive AST Approach

The Interactive Application Security Testing (IAST) tools combine the static approach and the dynamic approach. They have access to the internal structure of the application, and to the way it behaves with actual traffic. This privileged point of view is ideal to conduct security analysis.

From an architecture point of view, the IAST tools become part of the infrastructure that hosts the web applications, because an IAST runs together with the application server. This approach is called instrumentation, and it is implemented by a component known as an agent. Other platforms such as Application Performance Monitoring tools (APMs) share this proven approach.

Once the agent has been installed, it incorporates automatic security sensors in the critical execution points of the application. These sensors monitor the dataflow between requests and responses, the external components that the application includes, and data operations such as database access. This broad-spectrum coverage is much better than the visibility that SAST and DAST rely on.

In terms of specific results, we can look at two important metrics: how many types of vulnerabilities the tool finds, and how many of the identified vulnerabilities are false positives. The best DAST is able to find only 18% of the existing vulnerabilities on a test application. Even worse, around 50% of the vulnerabilities reported by the best SAST static analysis tool are not true problems!


Source: Hdiv Security via OWASP Benchmark public result data

The IAST approach provides these tangible benefits:

  1. Complete coverage, because the entire application is reviewed, both the custom code and the external code, such as open-source components and legacy dependencies.
  2. Flexibility, because it can be used in all environments; development, quality assurance (QA), and production.
  3. High accuracy, because the combination of static and dynamic points of view allows us to find more vulnerabilities with no false positives.
  4. Complete vulnerability information, including the static aspects (source code details) and dynamic aspects (execution details).
  5. Reduction of the duration of the security verification phase, so that the time-to-market of the secure applications is shorter.
  6. Compatibility with agile development methodologies, such as DevSecOps, because it can be easily automated and reduces manual verification activities.

An IAST tool can add tons of value to the security tooling of any organization concerned with the security of its software.

In the same way that everyone uses an automated spell checker to find typos in a document, we believe that any team would benefit from an automated validation of the security of an application.

However, ASTs do not represent a security utopia, since they can only detect security problems that follow a common pattern.

About the Author –

Roberto Velasco is the CEO of Hdiv Security. He has been involved with the IT and security industry for the past 16 years and is experienced in software development, software architecture and application security across different sectors such as banking, government and energy. Prior to founding Hdiv Security, Roberto worked for 8 years as a software architect and co-founded ARIMA, a company specialized in software architecture. He regularly speaks at Software Architecture and cybersecurity conferences such as Spring I/O and APWG.eu.

Post-Pandemic Recruiting Practices

Prabhakar Kumar Mandal

The COVID pandemic has transformed business as we know it, and that includes recruitment. From pre-hire activities to post-hire ones, no hiring practice will be exempt from the change we're witnessing. To maintain a feasible talent acquisition program now and in the coming years, organizations face a persistent need to reimagine the way they do things at every step of the hiring funnel.


In my view, the following are the key aspects to look at:

1. Transforming Physical Workspaces

Having employees physically present at the workplace is fraught with challenges now. We envision many companies transitioning into a fully or partially remote workforce to save on costs and give employees more flexibility.

This means companies that maintain physical headquarters will be paying much closer attention to the purpose those spaces really serve, and so will the candidates. The emphasis now will be on spaces of necessity: meeting areas, spaces for collaborative work, and comfortable, individual spaces for essential workers who need to be onsite.

2. Traveling for interviews will become obsolete

It’s going to be a while before non-essential travel assumes its pre-corona importance. In a study of traveler attitudes spanning the U.S., Canada, the U.K., and Australia, the portion of people who said they intended to restrict their travel over the next year increased from 24% in the first half of March to 40% in the second half of March.

Candidates will be less willing than they once were to jump on a plane for an in-person interview when a video conference is a viable alternative. 

3. Demand for workers with cross-trained skills will increase

Skills-based hiring has been on the rise and will keep increasing as businesses strive to do more with a smaller headcount. We anticipate that organizations will increasingly seek out candidates who can wear multiple hats.

Additionally, as machines take on more jobs that were once reserved for people, we will see even greater demand for uniquely human skills like problem solving and creative thinking. Ravi Kumar, president of Infosys Ltd., summed it up perfectly in an interview with Forbes: “machines will handle problem-solving and humans will focus on problem finding.” 

4. Recruiting events will look a lot different 

It’s unclear when large-scale, in-person gatherings like job fairs will be able to resume, but it will likely be a while. We will likely see most events move to a virtual model, which will not only reduce risk but significantly cut costs for those involved. This may open new opportunities to allocate that budget to improve some of the other pertinent recruiting practices on this list. 

Digital Transformation Services and Solutions

5. Time to hire may change dramatically

The current approach is likely to change. Consider, for example, that most people who took a new job last year were not searching for one: somebody came and got them. Businesses seek to fill their recruiting funnel with as many candidates as possible, especially 'passive candidates' who are not looking to move. Frequently, employers advertise jobs that do not exist, hoping to find people who might be useful later or in a different context. We always champion the importance of minding our recruiting metrics, which can help us not only hire more competently but also identify bottlenecks in our recruiting process.

Are there steps in the hiring process, like screening or onboarding, that can be accelerated to balance things out? Are there certain recruitment channels that typically yield faster hires than others that can be prioritized? These are important questions to ask as you analyze the pandemic's impact on your hiring funnel.

6. How AI can be leveraged to screen candidates

AI is helping candidates get matched with the right companies. There are over 100 parameters to assess the candidates. This reduces wastage of time, money, and resources. The candidates are marked on their core strengths. This helps the recruitment manager to place them in the apt role.

The current situation presents the perfect opportunity for companies to adopt new tools. Organizations can reassess their recruitment processes and strategies through HR-aligned technology.

Post-pandemic hiring strategy

This pertains more to the industries most impacted by the pandemic, like businesses in the hospitality sector, outdoor dining, and travel to name a few. Many of the applicants in this domain have chosen to make the shift towards more promising or booming businesses.

However, once the pandemic blows over and restrictions are lifted, you can expect suffering sectors to come back with major recruitment changes and fierce competition over top talent.

Companies that take this time to act, by cultivating relationships and connections with promising talent in their sphere, will have the advantage of gathering valuable data from probable candidates.

About the Author –

Prabhakar is a recruiter by profession and a cricketer by passion. His focus is on hiring for the infra vertical. He hails from a small town in Bihar and was brought up in Pondicherry. Prabhakar has represented Pondicherry in U-19 cricket (National School Games). In his free time he enjoys reading, working on his health and fitness, and spending time with his family and friends.

Quantum Computing

Vignesh Ramamurthy


In the MARVEL multiverse, Ant-Man has one of the coolest superpowers out there. He can shrink himself down as well as blow himself up to any size he desires! He was able to reduce to a subatomic size so that he could enter the Quantum Realm. Some fancy stuff indeed.

Likewise, there is Quantum computing. Quantum computers are more powerful than supercomputers and tech companies like Google, IBM, and Rigetti have them.

Google achieved quantum supremacy with its quantum computer 'Sycamore' in 2019. It claims to have performed in 200 seconds a calculation that would take the world's most powerful supercomputer 10,000 years. Sycamore is a 54-qubit computer. Such computers need to be kept under special conditions, with the temperature close to absolute zero.


Quantum Physics

Quantum computing falls under a discipline called quantum physics. Quantum computing's heart and soul resides in what we call qubits (quantum bits) and superposition. So, what are they?

Let's take a simple example: imagine you have a coin and you spin it. One cannot know the outcome unless it falls flat on a surface. It can either be heads or tails. However, while the coin is spinning, you can say the coin's state is both heads and tails at the same time (a qubit). This state is called superposition.

So, how do they work and what does it mean?

We know bits are either 0s or 1s (negative or positive states). Qubits are both at the same time. These qubits, in the end, pass through something called the 'Grover operator', which washes away all the possibilities but one.

Hence, from an enormous set of combinations, a single positive outcome remains, just like how Doctor Strange did in the movie Infinity War. However, what is important is to understand how this technically works.

We shall see two explanations which I feel give an accurate picture of the technical aspect of it.

The first is as explained by Scott Aaronson, a quantum scientist from the University of Texas at Austin.

Amplitude: a quantum state has an amplitude for being in a positive state and an amplitude for being in a negative state; these can be considered an amplitude for being 0 and an amplitude for being 1. The goal is to arrange things so that amplitudes leading to wrong answers cancel each other out. This way, the amplitude with the right answer remains the only possible outcome.

Quantum computers function using a process called superconductivity. We have a chip the size of an ordinary computer chip. There are little coils of wire in the chip, nearly big enough to see with the naked eye. There are 2 different quantum states of current flowing through these coils, corresponding to 0 and 1, or the superpositions of them.

These coils interact with each other; nearby ones talk to each other and generate what is called an entangled state, an essential state in quantum computing. The way the qubits interact is completely programmable, so we can send electrical signals to these qubits and tweak them according to our requirements. The whole chip is placed in a refrigerator at a temperature close to absolute zero. This way superconductivity occurs, which makes the currents briefly behave as qubits.

The second explanation is given by 'Kurzgesagt — In a Nutshell', a YouTube channel.

We know a bit is either a 0 or a 1. Now, 4 bits mean 0000 and so on. 4 classical bits can be in one of 2^4 different configurations at a time: 16 possible combinations, out of which we can use just one. 4 qubits in superposition, however, can be in all of those 16 combinations at once.

This grows exponentially with each extra qubit. 20 qubits can hence store a million values in parallel. As seen, these entangled states interact with each other instantly. Hence while measuring one entangled qubit, we can directly deduce the property of its partners.
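The counting argument above can be illustrated with a tiny classical state-vector simulation (an illustrative sketch only; real qubits are not arrays, and the variable names are my own):

```python
import numpy as np

n_qubits = 4
dim = 2 ** n_qubits                     # 16 basis states: 0000 ... 1111

# Equal superposition: all 16 configurations at once,
# each with amplitude 1/sqrt(16)
state = np.full(dim, 1 / np.sqrt(dim))
probs = state ** 2                      # measurement probabilities

print(dim)                              # 16 combinations for 4 qubits
print(2 ** 20)                          # 1048576 values held by 20 qubits
```

Note how the memory needed to simulate the state classically doubles with each extra qubit; that exponential blow-up is exactly what a quantum computer sidesteps.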

A normal logic gate gets a simple set of inputs and produces one definite output. A quantum gate manipulates an input of superpositions, rotates probabilities, and produces another set of superpositions as its output.

Hence a quantum computer sets up some qubits, applies quantum gates to entangle them, and manipulates probabilities. Now it finally measures the outcome, collapsing superpositions to an actual sequence of 0s and 1s. This is how we get the entire set of calculations performed at the same time.
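The "rotates probabilities" step can be sketched with the single-qubit Hadamard gate (illustrative matrix math, not device code):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

zero = np.array([1.0, 0.0])   # a qubit that always measures as 0
superpos = H @ zero           # the gate rotates it into a superposition

print((superpos ** 2).round(2))   # 50/50 between 0 and 1
```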

What is a Grover Operator?

We now know that, while taking one entangled qubit, it is possible to easily deduce properties of all its partners. The Grover algorithm works because these quantum particles are entangled. Since one entangled qubit can vouch for its partners, the algorithm iterates until it finds the solution with a high degree of confidence.
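For a small search space, the "wash away everything but one" behavior can be mimicked classically. A sketch of Grover-style amplitude amplification over 16 states (illustrative only; this is ordinary numpy, not a quantum device):

```python
import numpy as np

N = 16            # 4-qubit search space
marked = 11       # index of the single right answer

state = np.full(N, 1 / np.sqrt(N))       # uniform superposition

# about (pi/4) * sqrt(N) iterations are optimal; 3 here
for _ in range(3):
    state[marked] *= -1                  # oracle: flip the answer's sign
    state = 2 * state.mean() - state     # diffusion: invert about the mean

probs = state ** 2
print(probs.argmax())                    # the marked state now dominates
```

After three iterations the marked state carries about 96% of the probability; the amplitudes of the wrong answers have largely cancelled out.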

What can they do?

As of now, quantum computing hasn't been implemented in real-life situations because the world doesn't yet have the infrastructure for it.

Assuming they are efficient and ready to be used, we can make use of them in the following ways:

1) Self-driving cars are picking up pace. Quantum computers can be used in these cars to calculate all possible outcomes on the road. Apart from sensors to reduce accidents, roads consist of traffic signals. A quantum computer would be able to go through all the possibilities of how traffic signals function, the time intervals, the traffic, everything, and feed self-driving cars with the single best outcome accordingly. The result would be a seamless commute with no hassles whatsoever; the future as we see in movies.

2) If AI is able to construct a circuit board after having tried everything in the design architecture, this could result in promising AI-related applications.

Disadvantages

Quantum computers could breach RSA encryption, the encryption that underpins the entire internet, and hackers might steal confidential information related to health, defence, personal data, and other sensitive matters. At the same time, quantum computing could help achieve the most secure encryption, by identifying the best one amongst every possible encryption and finding the most secure wall against all the viruses that could infect the internet. If such security were built, it would take a completely new kind of virus to break it, and the chances of that are minuscule.

Quantum computing has its share of benefits; however, it will take years to be put to use. The infrastructure and the amount of investment required are humongous. After all, it can only be used when there are reliable real-time use cases, and it needs to be tested for many things. There is no doubt that quantum computing will play a big role in the future. However, with more sophisticated technology come more complex problems. The world will take years to be prepared for it.

About the Author –

Vignesh is part of the GAVel team at GAVS. He is deeply passionate about technology and is a movie buff.

Reduce Test Times and Increase Coverage with AI & ML

Kevin Surace

Chairman & CTO, Appvance.ai

With the need for frequent builds—often many times in a day—QEs can only keep pace through AI-led testing. It is the modern approach that allows quality engineers to create scripts and run tests autonomously to find bugs and provide diagnostic data to get to the root cause.

AI-driven testing means different things to different QA engineers. Some see it as using AI for identifying objects or helping create script-less testing; some consider it as autonomous generation of scripts while others would think in terms of leveraging system data to create scripts which mimic real user activity.

Our research shows that teams who are able to implement what they can in scripts and manual testing have, on average, less than 15% code, page, action, and likely user flow coverage. In essence, even if you have 100% code coverage, you are likely testing less than 15% of what users will do. That in itself is a serious issue.

Starting in 2012, Appvance set out to rethink the concept of QA automation. Today our AIQ Technology combines tens of thousands of hours of test automation machine learning with the deep domain knowledge, the essential business rules, that each QE specialist knows about their application. We create an autonomous expert system that spawns multiple instances of itself, which swarm over the application, testing at the UX and API levels. Along the way these intelligences write the scripts, hundreds and thousands of them, that describe their individual journeys through the application.

And why would we need to generate so many tests fully autonomously? Because applications today are 10X the size they were just ten years ago, but your QE team doesn’t have 10X the number of test automation engineers, and you have a tenth of the time you had 10 years ago to do the work. Just to keep pace with the dev team, each quality engineer has to be 100X more productive than they were 10 years ago.

Something had to change; that something is AI.

AI-testing in two steps

We leveraged AI and witnessed over 90% reduction in human effort to find the same bugs. So how does this work?

It’s really a two-stage process.

First, leveraging key AI capabilities in TestDesigner, Appvance’s codeless test creation system, we make it possible to write scripts faster, identify more resilient accessors, and substantially reduce maintenance of scripts.

With AI alongside you as you implement an automated test case, you get a technology that suggests the most stable accessors and constantly improves and refines them. It also creates “fallback accessors”: when a test runs and hits an accessor change, the script can continue even though the application has changed. And finally, the AI can self-heal scripts that break, updating them with new accessors without human assistance. These AI-based, built-in technologies give you the most stable scripts every time, with the most robust accessor methodologies and self-healing. Nothing else comes close.
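The fallback-accessor idea can be illustrated with a small sketch. This is purely hypothetical code, not Appvance's implementation; `find_element` and the accessor strings are stand-ins for a real driver API:

```python
# Hypothetical sketch of fallback accessors with self-healing.

def find_element(dom, accessor):
    """Pretend lookup: returns the element if the accessor matches, else None."""
    return dom.get(accessor)

def locate_with_fallback(dom, accessors):
    """Try each accessor in order; promote the first one that works (self-healing)."""
    for i, accessor in enumerate(accessors):
        element = find_element(dom, accessor)
        if element is not None:
            if i > 0:  # a fallback succeeded: heal the script in place
                accessors.insert(0, accessors.pop(i))
            return element
    raise LookupError("all accessors failed; script needs human attention")

# The app changed: the id-based accessor is gone, but the CSS path still works.
dom = {"css:form > button.submit": "<button>"}
accessors = ["id:submitBtn", "css:form > button.submit"]

element = locate_with_fallback(dom, accessors)
```

After the call, the accessor that actually worked has been promoted to the front of the list, so the next run tries it first; that is the essence of a self-healed script.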

The latter two interpretations above deal with autonomous generation of tests. To beat the queue and crush it, you need a heavy lift for finding bugs, and, as we have learnt, you have to go far beyond the use cases a business analyst listed. Job one is to find bugs and prioritize them, leveraging AI to generate tests autonomously.

Appvance’s patented AI engine has already been trained with millions of actions. You teach it the business rules of your application (machine learning), and it then creates real user flows: it takes every possible action, discovers every page, fills out every form, gets to every state, and validates the most critical outcomes just as you trained it to do. It does all this without writing or recording a single script. We call this ‘blueprinting’ an application, and we do it at every new build.

Multiple instances of the AI spin up, each selecting a unique path through the application, typically finding thousands or more flows in a matter of minutes. When complete, the AI hands you the results, including the bugs, all the diagnostic data to help find the root cause, and reusable test scripts to reproduce each bug. A further turn of the crank can refine these scripts into exact replicas of what production users are doing and apply them to the new build.

Any modern approach to continuous testing needs to leverage AI both in helping QA engineers create scripts and in autonomously creating tests, so that the two parts work together to find bugs and provide data to get to the root cause. That AI-driven future is available today from Appvance.
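As a toy illustration of blueprinting (hypothetical, not the AIQ engine): if the application is modeled as a graph of pages and actions, autonomous test generation amounts to enumerating distinct user flows through that graph, each of which becomes a candidate script.

```python
# Toy model: an app as a graph of pages; each root-to-leaf path is one user flow.
def enumerate_flows(app, page, path=None):
    """Depth-first enumeration of all complete flows starting from `page`."""
    path = (path or []) + [page]
    next_pages = app.get(page, [])
    if not next_pages:               # terminal state: one complete flow
        return [path]
    flows = []
    for nxt in next_pages:
        flows.extend(enumerate_flows(app, nxt, path))
    return flows

# Hypothetical app map: login leads to a dashboard, then two features.
app = {
    "login": ["dashboard"],
    "dashboard": ["search", "checkout"],
    "checkout": ["confirmation"],
}

flows = enumerate_flows(app, "login")
# e.g. one flow is ['login', 'dashboard', 'checkout', 'confirmation']
```

A real application graph is discovered at runtime and is vastly larger, which is why many AI instances explore it in parallel, but the principle of turning discovered paths into reusable scripts is the same.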

About the Author –

Kevin Surace is a highly lauded entrepreneur and innovator. He’s been awarded 93 worldwide patents, and was Inc. Magazine Entrepreneur of the Year, CNBC Innovator of the Decade, a Davos World Economic Forum Tech Pioneer, and inducted into the RIT Innovation Hall of Fame. Kevin has held leadership roles with Serious Energy, Perfect Commerce, CommerceNet and General Magic and is credited with pioneering work on AI virtual assistants, smartphones, QuietRock and the Empire State Building windows energy retrofit.

Business Intelligence Platform RESTful Web Service

Albert Alan

RESTful API

RESTful Web Services are web services based on the REST architecture. Representational State Transfer (REST) is a style of software architecture for distributed systems such as the World Wide Web. In this architectural style, data and functionality are considered resources and are accessed using Uniform Resource Identifiers (URIs), typically links on the Web.


REST has some advantages over SOAP (Simple Object Access Protocol) but is similar in technology, since it is also a function call via the HTTP protocol. REST is easier to call from various platforms, transfers pure human-readable data in JSON or XML, and is faster and saves resources.

The basic idea of REST is that the object itself is addressed, not its methods. The state of the object can be changed by a REST access, the change being driven by the passed parameters. A frequent application is connecting SAP PI via the REST interface.

When to Use REST Services

  • You want to access BI platform repository objects or perform basic scheduling.
  • You want to use a programming language that is not supported by another BI platform SDK.
  • You want to extract all the query details and the number of records per query for all the reports (Webi, Crystal, etc.).
  • You want to extract the folder paths of all reports at once.

Process Flow

[Figure: RESTful Web Service process flow]

RESTful Web Service Requests

To make a RESTful web service request, you need the following:

  • URL – The URL that hosts the RESTful web service.
  • Method – The type of HTTP method to use for sending the request, for example GET, PUT, POST, or DELETE.
  • Request header – The attributes that describe the request.
  • Request body – Additional information that is used to process the request.
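The four ingredients above can be assembled, for example, with Python's standard library. The host, port, and token below are placeholders; the `X-SAP-LogonToken` header is the BI platform's convention for passing the token obtained at logon:

```python
import json
import urllib.request

# Placeholder values; a real BI server supplies the host and the logon token.
url = "http://localhost:6405/biprws/v1/infostore"
headers = {
    "Accept": "application/json",                  # request header: desired format
    "X-SAP-LogonToken": "<token-from-/v1/logon>",  # placeholder logon token
}
body = json.dumps({"query": "SELECT SI_ID, SI_NAME FROM CI_INFOOBJECTS"}).encode()

# Method, URL, headers, and body together form the RESTful request.
request = urllib.request.Request(url, data=body, headers=headers, method="POST")
# urllib.request.urlopen(request) would send it; omitted here to stay offline.
```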

Common RWS Error Messages

[Figure: common RWS error messages]

RESTful Web Service URIs Summary List

URL | Response | Comments
/v1 | Service document that contains a link to the /infostore API. | This is the root level of an infostore resource.
/v1/infostore | Feed containing all the objects in the BOE system. | /v1/infostore
/v1/infostore/<object_id> | Entry corresponding to the info object with SI_ID=<object_id>. | /v1/infostore/99
/v1/logon/long | Returns the long form for logon, which contains the user and password authentication template. | Used to log on to the BI system based on the authentication method.
/v1/users/<user_id> | XML feed of user details in the BOE system. | You can modify a user using the PUT method and delete a user using the DELETE method.
/v1/usergroups/<usergroup_id> | XML feed of user group details in the BOE system. | Supports the GET, PUT, and DELETE methods. You can modify a user group using the PUT method and delete a user group using the DELETE method.
/v1/folders/<folder_id> | XML feed that displays the details of the folder; can be used to modify the details of the folder and to delete the folder. | You can modify the folder using the PUT method and delete the folder using the DELETE method.
/v1/publications | XML feed of all publications created in the BOE system. | This API supports the GET method only.

Extended Workflow

The workflow is as follows:

  • Pass the base URL

GET http://localhost:6405/biprws/v1/users

  • Pass the headers

  • Get the XML/JSON response
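The last step, consuming the XML/JSON response, might look as follows. The response text here is a made-up sample in the shape of a feed of entries; a real BI response carries more attributes:

```python
import json

# Made-up sample response for GET .../v1/users, shaped as a feed of entries.
response_text = """
{
  "entries": [
    {"SI_ID": 12, "SI_NAME": "Administrator"},
    {"SI_ID": 13, "SI_NAME": "alan"}
  ]
}
"""

feed = json.loads(response_text)
user_names = [entry["SI_NAME"] for entry in feed["entries"]]
```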

Automation of REST Calls

The Business Intelligence platform RESTful Web Service SDK (BI-REST-SDK) allows you to programmatically access BI platform functionalities such as administration, security configuration, and modification of the repository. In addition to the BI-REST-SDK, you can also use the SAP Crystal Reports RESTful Web Services (CR REST SDK) and the SAP Web Intelligence RESTful Web Services (WEBI REST SDK).

Implementation

An application has been designed and implemented in Java to automate the extraction of the SQL queries of all the Webi reports from the server at once.

Tools used:

  • Postman (Third party application)
  • Eclipse IDE

The structure of the application is as below:

The application comprises the required Java jar files, Java class files, Java properties files, and logs. The Java class file (SqlExtract) is the source code; it is compiled and executed from the command prompt as follows:

Step 1

  • javac -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract.java

The above command compiles the Java code.

Step 2

  • java -cp ".;java-json.jar;json-simple-1.1.jar;log4j-1.2.17.jar" SqlExtract

The above command runs the compiled class.

The Java properties file (log4j) is used to set the configuration for the code to run; the path of the log file can also be set in the properties file.


The log (SqlExtractLogger) contains the required output: all the extracted queries for the Webi reports, along with the data source name, type, and row count for each query, written to the folder set by the user in the properties file.


The application is standalone and can run on any Windows platform or server that has a Java JRE (version 1.6 or higher preferred) installed.

Note: All the above steps required to execute the application are consolidated in the (steps) file.

Conclusion

SAP BO provides RESTful web services to traverse its repository, fetch structural information, and modify the metadata structure based on user requirements. Integration with programming languages like Python, Java, etc. extends the scope even further, allowing the user to automate workflows and solve backtracking problems.

Handling RESTful web services needs expertise in server administration and programming, as changes made to the metadata are irreversible.


About the Author –

Alan is a SAP Business Intelligence consultant with a critical thinking and an analytical mind. He believes in ‘The more extensive a man’s knowledge of what has been done, the greater will be his power of knowing what to do’.

Enabling Success through Servant Leadership

Vasu

Vasudevan Gopalan

Servant Leadership – does it seem like a contradiction in terms? Well, it is not so. In this new age of Agile and Digital Transformation, this is a much sought-after trait in Leaders by their Organizations.


The goal of Servant Leadership is to Serve. It involves the leader supporting and empowering their teams and thus enabling Success. The paradigm shift in the thought process here is that – instead of the people working to serve the leader, the leader exists to serve the team. And do remember that a Servant Leader is a Servant first, Leader next – not the other way around 😊

In today’s Agile world of Software Delivery, the Scrum Master needs to be a Servant Leader.

So, what are the characteristics of a Servant Leader?

  • Self-aware
  • Humble
  • Acts with integrity
  • Result-oriented
  • Has foresight
  • Listens actively
  • Doesn’t abuse authority
  • Carries intellectual authority
  • Collaborative
  • Trusting
  • Coaches others
  • Resolves conflict

As you can see here, it is all about achieving results through people empowerment. When people realize that their Leader helps every team member build a deep sense of community and belonging in the workplace, there is a higher degree of accountability and responsibility carried out in their work.

Ultimately, a Servant Leader wants to help others thrive, and is happy to put the team’s needs before their own. They care about people and understand that the best results are produced not through top-down delegation but by building people up. People need psychological safety and autonomy to be creative and innovative.

As Patrick Lencioni describes, Humility is one of the 3 main pillars for ideal team players. Humility is “the feeling or attitude that you have no special importance that makes you better than others”.

Behaviors of Humble Agile Servant Leaders

  • Listening deeply and observing
  • Being open to new ideas from team members
  • Appreciating the strengths and contributions of team members
  • Seeking the contributions of team members to overcome challenges and limitations together
  • Being coachable coaches – i.e. coaching others while remaining easy to be coached by others

Humility’s foe – Arrogance

In Robert Hogan’s terms, arrogance makes “the most destructive leaders” and “is the critical factor driving flawed decision-makers” who “create the slippery slope to organizational failure”.

Humility in Practice

A study of the personalities of CEOs of some of the top Fortune 1000 companies shows that what makes these companies as successful as they are is the CEOs’ humility. These CEOs share two sets of seemingly contradictory qualities that strongly reinforce each other:

  • They are “self-effacing, quiet, reserved, even shy”. They are modest. And they admit mistakes.
  • At the same time, behind this reserved exterior, they are “fiercely ambitious, tremendously competitive, tenacious”. They have strong self-confidence and self-esteem. And they’re willing to listen to feedback and solicit input from knowledgeable subordinates.

According to Dr. Robert Hogan (2018), these characteristics of humility create “an environment of continuous improvement”.

What are the benefits of being a humble Servant Leader?

  • Increase inclusiveness – the foundation of trust
  • Strengthen the bond with peers – the basis of well-being
  • Deepen awareness
  • Improve empathy
  • Increase staff engagement

So, what do you think would be the outcomes for organizations that have practicing Servant Leaders?

Source:

https://www.bridge-global.com/blog/5-excellent-tips-to-become-a-supercharged-agile-leader/

About the Author –

Vasu heads the Engineering function for A&P. He is a Digital Transformation leader with ~20 years of IT industry experience spanning across Product Engineering, Portfolio Delivery, Large Program Management, etc. Vasu has designed and delivered Open Systems, Core Banking, Web / Mobile Applications, etc. Outside of his professional role, Vasu enjoys playing badminton and is a fitness enthusiast.

Zero Knowledge Proofs in Healthcare Data Sharing

Srinivasan Sundararajan

Recap of Healthcare Data Sharing

In my previous article (https://www.gavstech.com/healthcare-data-sharing/), I had elaborated on the challenges of Patient Master Data Management, Patient 360, and associated Patient Data Sharing. I had also outlined how our Rhodium framework is positioned to address the challenges of Patient Data Management and data sharing using a combination of multi-modal databases and Blockchain.

In this context, I have highlighted our maturity levels and the journey of Patient Data Sharing as follows:

  • Single Hospital
  • Between Hospitals part of HIE (Health Information Exchange)
  • Between Hospitals and Patients
  • Between Hospitals, Patients, and Other External Stakeholders

In each of the stages of the journey, I have highlighted various use cases. For example, in the third level of health data sharing between Hospitals and Patients, the use cases of consent management involving patients as well as monetization of personal data by patients themselves are mentioned.

In the fourth level of the journey, you must’ve read about the use case “Zero Knowledge Proofs”. In this article, I would be elaborating on:

  • What is a Zero Knowledge Proof (ZKP)?
  • What is its role and importance in healthcare data sharing?
  • How does the Blockchain-powered GAVS Rhodium Platform help address the needs of ZKP?

Introduction to Zero Knowledge Proof

As the name suggests, Zero Knowledge Proof is about proving something without revealing the data behind that proof. Each transaction has a ‘verifier’ and a ‘prover’. In a transaction using ZKPs, the prover attempts to prove something to the verifier without revealing any other details to the verifier.
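A classic small-scale illustration of the prover/verifier interaction is the Schnorr identification protocol, sketched below with toy numbers (real deployments use very large groups, and this is an illustration of the concept rather than a healthcare-grade implementation). The prover convinces the verifier that they know a secret x behind the public value y = g^x mod p, without ever revealing x:

```python
import secrets

# Toy public parameters: p = 2q + 1, and g generates the subgroup of order q.
p, q, g = 23, 11, 2
x = 7                # prover's secret
y = pow(g, x, p)     # public value: the claim is "I know x such that y = g^x"

def prove(challenge, r):
    """Prover's response; on its own it reveals nothing about x."""
    return (r + challenge * x) % q

# One round of the protocol
r = secrets.randbelow(q)         # prover's one-time randomness
t = pow(g, r, p)                 # commitment sent to the verifier
c = secrets.randbelow(q)         # verifier's random challenge
s = prove(c, r)                  # prover's response

# Verifier checks g^s == t * y^c (mod p) and learns only that the proof holds.
valid = pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier accepts because g^s = g^(r + c·x) = t · y^c (mod p), yet the transcript (t, c, s) leaks nothing about x itself, which is exactly the property healthcare data sharing wants to exploit.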

Zero Knowledge Proofs in Healthcare 

In today’s healthcare industry, a lot of time-consuming due diligence is done based on a lack of trust.

  • Insurance companies are always wary of fraudulent claims (a major issue in any case), hence a lot of documentation and details are obtained and analyzed.
  • Hospitals, at the time of patient admission, need to know more about the patient, their insurance status, payment options, etc., hence they do detailed checks.
  • Pharmacists may have to verify that the Patient is indeed advised to take the medicines and give the same to the patients.
  • Patients most times also want to make sure that the diagnosis and treatment given to them are indeed proper and no wrong diagnosis is done.
  • Patients also want to ensure that doctors have legitimate licenses with no history of malpractice or any other wrongdoing.

In a healthcare scenario, any of the parties, i.e. the patient, hospital, pharmacy, or insurance company, can take on the role of the verifier; typically patients, and sometimes hospitals, are the provers.

While ZKP can be applied to any of the transactions involving the above parties, current industry research is mostly focused on patient privacy rights: ZKP initiatives target how little information a patient (the prover) needs to share with a verifier before getting the required service based on the assertion of that proof.

Blockchain & Zero Knowledge Proof

While I am not getting into the fundamentals of Blockchain here, readers should understand that one of the fundamental backbones of Blockchain is trust within the context of pseudo-anonymity. In other words, some of the earlier uses of Blockchain, like cryptocurrency, aim to promote trust between unknown individuals without revealing any of their personal identities, yet allow participation in transactions.

Some of the characteristics of the Blockchain transaction that makes it conducive for Zero Knowledge Proofs are as follows:

  • Each transaction is initiated in the form of a smart contract.
  • Smart contract instance (i.e. the particular invocation of that smart contract) has an owner i.e. the public key of the account holder who creates the same, for example, a patient’s medical record can be created and owned by the patient themselves.
  • The other party can trust that transaction as long as they know the public key of the initiator.
  • Some of the important aspects of an approval life cycle like validation, approval, rejection, can be delegated to other stakeholders by delegating that task to the respective public key of that stakeholder.
  • For example, if a doctor needs to approve a medical condition of a patient, the same can be delegated to the doctor and only that particular doctor can approve it.
  • The anonymity of a person can be maintained, as everyone will see only the public key and other details can be hidden.
  • Some of the approval documents can be transferred using off-chain means (outside of the blockchain), such that participants of the blockchain will only see the proof of a claim but not the details behind it.
  • Further extending the data transfer with encryption of the sender’s private/public keys can lead to more advanced use cases.
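The ownership-and-delegation pattern in the list above can be sketched as follows. This is a simplified in-memory model, not the Rhodium implementation; the keys are stand-in strings rather than real cryptographic keys:

```python
# Simplified model of a record owned by one public key, with approval
# delegated to another key, mirroring the smart-contract pattern above.

class MedicalRecord:
    def __init__(self, owner_key, condition):
        self.owner_key = owner_key        # e.g. the patient's public key
        self.condition = condition
        self.approver_key = None
        self.approved = False

    def delegate_approval(self, caller_key, approver_key):
        """Only the owner may delegate approval (e.g. to a doctor's key)."""
        if caller_key != self.owner_key:
            raise PermissionError("only the owner can delegate")
        self.approver_key = approver_key

    def approve(self, caller_key):
        """Only the delegated key may approve the medical condition."""
        if caller_key != self.approver_key:
            raise PermissionError("caller is not the delegated approver")
        self.approved = True

record = MedicalRecord(owner_key="patient-pubkey", condition="diagnosis-X")
record.delegate_approval("patient-pubkey", approver_key="doctor-pubkey")
record.approve("doctor-pubkey")
```

On an actual blockchain the same checks are enforced by the smart contract against transaction signatures, so everyone sees only public keys while the approval rules remain binding.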

Role of Blockchain Consortium

While Zero Knowledge Proofs can be implemented on any Blockchain platform, including totally uncontrolled public blockchain platforms, their usage is best realized in private Blockchain consortiums. Here the identity of all participants is known and each participant trusts the others, but the due diligence otherwise needed alongside the actual submission of proof is avoided.

Organizations that are part of similar domains and business processes form a Blockchain Network to get business benefits of their own processes. Such a Controlled Network among the known and identified organizations is known as a Consortium Blockchain.

Illustrated view of a Consortium Blockchain involving multiple organizations whose access rights differ. Each member controls their own access to the Blockchain Network with cryptographic keys.

Members typically interact with the Blockchain Network by deploying Smart Contracts (i.e. Creating) as well as accessing the existing contracts.

Current Industry Research on Zero Knowledge Proof

Zero Knowledge Proof is a new but powerful concept for building trust-based networks. While a basic Blockchain platform can support the concept in a trust-based manner, a lot of research is being done to come up with truly algorithmic zero knowledge proofs.

A zk-SNARK (“zero-knowledge succinct non-interactive argument of knowledge”) utilizes a concept known as a “zero-knowledge proof”. Developers have already started integrating zk-SNARKs into Ethereum Blockchain platform. Zether, which was built by a group of academics and financial technology researchers including Dan Boneh from Stanford University, uses zero-knowledge proofs.

ZKP In GAVS Rhodium

As mentioned in my previous article on Patient Data Sharing, Rhodium is a futuristic framework that treats Patient Data Sharing as a journey across multiple stages, and at the advanced maturity levels Zero Knowledge Proofs definitely find a place. Healthcare organizations can start experimenting and innovating on this front.

Rhodium Patient Data Sharing Journey


The healthcare industry today is affected by fraud and a lack of trust on one side, and by growing patient privacy concerns on the other. In this context, introducing Zero Knowledge Proofs as part of healthcare transactions will help the industry optimize itself and move towards seamless operations.

About the Author –

Srini is the Technology Advisor for GAVS. He is currently focused on Data Management Solutions for new-age enterprises using the combination of Multi Modal databases, Blockchain, and Data Mining. The solutions aim at data sharing within enterprises as well as with external stakeholders.