Thought Leadership

The Dangers of Groupthink in Regulated Industries

04 July 2024

We all experienced the shock of the recent Boeing fiasco and wondered: how could this happen? Surely there were quality gates in place, or someone who could have caught it?

The fact is, it's more shocking to see a door fly off a plane than to hear about a recalled medical product. Like, say, Amneal's ranitidine hydrochloride, recalled for high levels of N-nitrosodimethylamine (a probable human carcinogen). Maybe that doesn't hit close enough to home. How about vegetables recalled for risk of listeria? Or peanut butter for risk of salmonella? Recall events in the pharmaceutical industry reached a 19-year high in 2023, rising 42.4%.

And there usually are people who saw these things coming and actively tried to initiate corrections or improvements. However, there is an elephant in the room we need to address when it comes to the change-makers who want to help the company do better and do differently. That elephant is Groupthink. It's large and in charge, and it squashes everyone trying to steer the ship in a different, safer, more efficient direction. Being the target of this elephant hurts. It's soul-crushing, isolating, frustrating.


What is groupthink?

Groupthink is a psychological phenomenon in which people conform to the consensus view rather than engage in critical thinking during decision-making. It significantly shapes human behavior and attitudes.

I'm not claiming that all of these issues are directly related to groupthink; however, I do believe it's a large contributor.

It's simply human nature to listen to views that feel good instead of ideas that make us think hard. Add in corporate politics, pushback, personalities, procedural changes, testing, training, added work and lower budgets, and there are no reasons to change and plenty to keep things exactly the same. Continuous improvement starts to look really costly and annoying.

One of the problems is "cognitive laziness" (along with unquestioned beliefs, stereotyping, self-censorship and low self-esteem, to name a few). Research shows people prefer the ease of holding on to old views rather than grappling with new ideas (Adam Grant, Think Again, 2021). That aligns closely with the popular pushback phrases we often hear: "this is how we've always done it" and "this was not an issue in the last inspection (or audit, or implementation, or...)."

And isn't that exactly what we're doing when we refine our data governance frameworks to ensure data integrity: grappling with new ideas and trying to bring them to life with the help of others? We're certainly never going to get everything right on the first go; that's not realistic. These ideas don't just affect us in our department; they affect the business, the execs, the system owners, the data stewards, and everyone in between.


So, how do we combat this pushback and sell new approaches that are vital for compliance, safety and the business in general?

We need cognitive flexibility and agility, within ourselves and across the whole organization: the ability to move from one extreme of an idea to the other.

When we present our case (and never attempt this without actual data in hand), focus more on the "how" than the "why." We want to avoid resistance to the ideas and a lack of creativity in the problem solving. Talking about the "why" makes us likely to double down on our beliefs, whereas focusing on "how would it work" helps us talk through the process and discover weaknesses in implementation by taking all the viewpoints into consideration. It also helps everyone feel heard and addressed.

Our role is to hold up a mirror that gets people to examine their beliefs and get curious about alternate points of view.


Don’t let groupthink stifle continuous improvement (or basic consumer safety). Patients and end users all over the world are counting on us, if not for innovation, then at the very least to prevent harm. 


The Formula for Data Governance Maturity

18 Jun 2024

Measuring your data governance maturity is crucial for continuous improvement and staying ahead in today's competitive landscape. It's the only way to ensure you're maximizing the value of your data and making informed business decisions.

The Formula for Data Governance Maturity

Data governance maturity can be simplified as a combination of three key elements:

The Benefits of Effective Data Governance

Effective data governance leads to a number of significant benefits, including:


Evaluating Data Governance Maturity

To assess your organization's data governance maturity, you can evaluate these key areas:

The Data Governance Maturity Spectrum

Refer to the chart above that details data governance maturity, ranging from level 1 to level 5.

> Level 1: Rudimentary, Reactive, and Unorganized: Teams are constantly struggling to fix data-related problems.

> Level 5: Smooth Sailing and Working in Unison: Departments collaborate effectively, and data governance is a well-oiled machine, freeing up resources for innovation and industry leadership.

The Importance of Inclusivity in Assessment

The key to a successful assessment is to involve everyone in the organization, not just a select few. This comprehensive approach ensures you identify all your data governance strengths and weaknesses, allowing you to pinpoint areas that need improvement to achieve your overall data governance goals.
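
To make the roll-up concrete, here is a minimal sketch (assuming a simple 1-to-5 scoring survey; the area names and scores are invented for illustration) of how responses gathered from across the organization could be aggregated to surface the weakest governance areas:

```python
from statistics import mean

# Hypothetical self-assessment responses: each respondent scores
# a set of governance areas on the 1-5 maturity scale.
responses = [
    {"role": "QA",      "scores": {"data ownership": 2, "documentation": 3, "access control": 4}},
    {"role": "IT",      "scores": {"data ownership": 3, "documentation": 2, "access control": 4}},
    {"role": "Lab SME", "scores": {"data ownership": 1, "documentation": 2, "access control": 3}},
]

def roll_up(responses):
    """Average each area's score across all respondents."""
    areas = {}
    for response in responses:
        for area, score in response["scores"].items():
            areas.setdefault(area, []).append(score)
    return {area: round(mean(scores), 1) for area, scores in areas.items()}

# Lowest-scoring areas are the first candidates for improvement.
for area, score in sorted(roll_up(responses).items(), key=lambda kv: kv[1]):
    print(f"{area}: {score} / 5")
```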

Key Term Definitions


In a world where all our decisions are based on data collection and processing (even cognitively), let's make sure the foundation is sturdy. 

Avoiding Growing Pains: Essential Strategies for System Scalability (part 2 of 2)

24 April 2024

Welcome back to part 2 of getting our GxP systems to scale properly with organizational growth. In part 1 of the series, we discussed the 5 sins that inhibit scalability, ultimately causing increased risk, increased system incidents ("fires" that continuously eat up resources) and interruptions to digitization efforts. So, let's start talking about how to build quality and assurance into our systems to ensure they keep up with our organization's growth trajectory.


Plan for in-depth discussions with the vendor about the recommended installation requirements. Have this information in multiple modes of communication and involve the technical teams (IT and OT) early in the process. Plan ahead to source and procure any required hardware, software or network changes (supply chains are still volatile). Dry run as much as possible. Bring the vendor onsite if required. Read and understand the design and system manuals. Ensure the system you are buying is current and will not be outdated in the near future (prevent premature obsolescence). Use the installation requirements as an opportunity to start updating your network design in manageable increments (an entire overhaul is usually not feasible).


Intimately understand the system and what it can offer you. Understand the details behind the myriad of settings and options available, and the risk of turning each configuration or setting on or off. Resist the urge to turn advanced settings off just because enabling them would cause a cascade of changes to how things are currently done (procedures, documentation, workflows and training). Evaluate the (high) risk of disabling data integrity and security settings simply to keep the old, familiar, paper-based process.
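
One lightweight way to make that evaluation visible is a configuration inventory that records each setting, its intended state and the risk of disabling it. A minimal sketch (the setting names, states and risk ratings are illustrative assumptions, not options from any particular vendor):

```python
# Hypothetical configuration inventory: one entry per setting under evaluation.
config_inventory = [
    {"setting": "audit trail",           "enabled": True,  "risk_if_disabled": "high",
     "rationale": "loses who/what/when traceability"},
    {"setting": "electronic signatures", "enabled": True,  "risk_if_disabled": "high",
     "rationale": "falls back to paper sign-offs"},
    {"setting": "automatic data export", "enabled": False, "risk_if_disabled": "medium",
     "rationale": "manual transcription introduces errors"},
]

def high_risk_gaps(inventory):
    """Return settings that are disabled despite a high risk rating."""
    return [item["setting"] for item in inventory
            if not item["enabled"] and item["risk_if_disabled"] == "high"]

print(high_risk_gaps(config_inventory))  # the goal is an empty list
```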


How can you understand the design of anything without breaking it down to the sum of its parts? Use unscripted testing to mimic panicked user behavior and understand unexpected reactions, errors or load. Add in human factors, use the system outside of its scope and insert controls to mitigate as needed. Plan for the data lifecycle to grow and become more complex. Yes, maybe only 3 people will use it today and generate little data, but what happens if 50 people start using it? Put data and process protections in early: backup, archive, disaster recovery, business continuity. Think ahead about potential data migration(s) and consider that during the design. Inject follow-ups to the design through routine preventative maintenance activities.
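
As a rough illustration of planning for that growth, here is a back-of-the-envelope sketch (the record size and generation rate are arbitrary assumptions) projecting yearly data volume as usage scales from a handful of users to fifty:

```python
# Back-of-the-envelope data growth projection; all figures are assumptions.
RECORD_SIZE_KB = 250           # average size of one record plus its metadata
RECORDS_PER_USER_PER_DAY = 40  # assumed generation rate

def yearly_volume_gb(users: int) -> float:
    """Projected data volume per year, in GB, for a given user count."""
    kb_per_year = users * RECORDS_PER_USER_PER_DAY * RECORD_SIZE_KB * 365
    return kb_per_year / (1024 ** 2)

for users in (3, 10, 50):
    print(f"{users:>2} users -> ~{yearly_volume_gb(users):,.0f} GB/year "
          f"(before backups and archives)")
```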


First, establish a robust data governance framework. Understand and classify your data; test and ensure its quality and reliability. Have an understanding of, and a plan for, the system and data architecture, roles and responsibilities, and data flow, with the human factors included. Declutter the data that holds no value or impact. Protect GxP data through role-based access control (RBAC) inside and outside the system, and define clear and controlled processes for security, training, change management, incident handling and monitoring.
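
As a toy illustration of what RBAC looks like when it is written down rather than assumed, here is a minimal sketch (the role names and permissions are hypothetical, not tied to any particular product):

```python
# Hypothetical role-to-permission map for a GxP system.
ROLE_PERMISSIONS = {
    "viewer":        {"read"},
    "analyst":       {"read", "create"},
    "reviewer":      {"read", "create", "approve"},
    "administrator": {"read", "configure_system"},  # no "approve": separation of duties
}

def is_allowed(role: str, action: str) -> bool:
    """Check whether a role is permitted to perform an action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("reviewer", "approve")
assert not is_allowed("analyst", "approve")        # analysts cannot approve their own work
assert not is_allowed("administrator", "approve")  # admins configure; they do not approve records
```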


Second, cultivate a culture of vigilance and sensitivity to breaks or errors in the data lifecycle. Establish good digital hygiene in every person in the organization. Every person should have a deep understanding of their personal contribution to data security, quality and accuracy. They should be empowered to stay on the lookout for any breaks, lapses or threats to the process and have a clear understanding of the reporting and investigation requirements defined in the governance plan.



Documentation is the rock on which everything stands. Avoid tribal knowledge through thorough and frequent documentation. Ensure your data governance framework is clearly defined, documented and updated as needed. Ensure living documents stay living: an accurate reflection of the system in its current state. Future plans and visions can be kept separate, outlined in a plan. Include enough detail in diagrams, manuals, QMS entries, the CMDB, SOPs and specifications to bring any new team member up to speed. Ensure any metadata contained within Subject Matter Experts' heads is written down in a clear and traceable manner, easily understood anytime it's referenced.




Building scalable GxP systems requires a proactive approach that prioritizes quality from the outset. By carefully considering installation needs, understanding system capabilities, planning for growth, and establishing a strong data culture, organizations can ensure their systems can keep pace with their ambitions. Remember, a well-designed and well-maintained system is an investment that will pay off in the long run, reducing risks, streamlining operations, and supporting continued digital transformation.



Avoiding Growing Pains: Essential Strategies for System Scalability (part 1 of 2)

20 January 2024


One of the principles of The 7 Habits of Highly Effective People is "begin with the end in mind." In pharmaceutical system procurement and implementation, "beginning with the end in mind" does not stop at the end-user pain points the system alleviates; it ends at the end of the data and system lifecycle (data and system retirement). In this two-part series, we explore our automated systems as they relate to scalability and potential risks to data governance (part 1) and how to build quality into the design to reduce system incidents (part 2).


What is scalability? 


Scalability is the measure of a system's ability to increase or decrease in performance in response to changes in application and system processing demands. The system remains productive regardless of an increase in traffic, users, and data.


One of the biggest problems I see time and time again, especially in small to midsize companies, is the lack of scalability planning for their systems during initial planning and setup. The system is set up for immediate use and simply does not keep up with the company's growth.


Why does this happen?


Generally, newer systems have lots of capabilities and a robust build, but the system is implemented and set up in a limited manner, typically due to timelines, budgets, outdated network design and tight resources. This hasty setup does not scale up well: as the organization, and thus the demands on the system, grow, so do the incidents, resource allocation, cost and downtime.


An example: you buy a 2023 Cadillac CT5-V Blackwing boasting a 668-horsepower supercharged V8 engine, but your shop is set up for a 1970 Oldsmobile 442 (and they don't make those anymore!). So you bought a top-of-the-line engine, but your networks, adjacent systems, hardware, processes and people are still accustomed to handling older, manual models.


 Here’s what we want to avoid:


1. Hasty Setups with Limited Understanding

Understanding of the system and its integration potential remains shallow and undiscovered. Initial installation requirements are not received or properly met. The standard integration and setup is cobbled together to get the system installed as soon as possible. For example, the system (perhaps multiple identical systems in different areas) should connect to wi-fi so the data can flow to a central repository. Wi-fi is not currently available in this area, so the team opts to house and maintain the data on each system manually, introducing high risk and high cost. This short-circuits the growth potential and bypasses the built-in risk mitigation to meet current demand instead of allowing for easy exponential growth. It also makes changes to the data streams harder down the line if needed.


2. Underusing Advanced Capabilities 

The system's configuration is not fully dissected and understood. In many cases, the vendor is relied on to make the appropriate configurations. The problem is that the vendor has a limited understanding of the process and people intricacies needed to set up the optimal configuration. Or perhaps the company makes its own configurations without a full understanding of the options or of the impact of each setting. Both situations leave room for underuse of existing capabilities and optimization potential.



3. Tribal Knowledge

Tribal knowledge of the initial setup is not properly documented, which causes confusion downstream. This is a type of metadata (information that exists only in people's heads) that gets lost: the "why" behind the myriad of options for classification, configuration, settings, locations, setup, etc. It may be justified with the intention of changing it in the future, but that knowledge may disappear with employees as they depart from the team or company. This is especially exacerbated by hasty implementations that experienced lots of unforeseen installation hiccups. Last-minute problem solving by the Subject Matter Expert (SME) may never make it into proper documentation that outlines the initial plan.


4. Poor Data Governance 

Newer systems have great integration capabilities to streamline and secure data collection. When workarounds are taken, both the security and the collection of the data are compromised. Instead of automated and smooth, data collection becomes manual and sporadic. A lack of optimal connectivity introduces breaks (and thus breaches) in the data lifecycle via workarounds that attempt to fit the system into an "older" environment, which defeats the purpose.


Additionally, the data lifecycle is not fully mapped out. Data streams and their classification are not fully understood; data integrity may not be robustly tested (only the vendor's word or a white paper is referenced); data security is neglected (including the popular culprits: unclear roles and permissions, system security, data and audit trail oversight, data ownership, archiving, backup procedures and business continuity plans); and data storage locations may be inadequate or data quality poor.
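
A lightweight way to start that mapping is a simple data stream register. A minimal sketch (the stream names, classifications and storage locations are illustrative assumptions):

```python
# Hypothetical data stream register: one entry per stream in the lifecycle map.
data_streams = [
    {"stream": "balance readings", "gxp_impact": "high",
     "storage": "central repository", "backup": True,  "retention_years": 10},
    {"stream": "instrument logs",  "gxp_impact": "medium",
     "storage": "local workstation",  "backup": False, "retention_years": 5},
    {"stream": "training records", "gxp_impact": "high",
     "storage": "QMS",                "backup": True,  "retention_years": 10},
]

def lifecycle_gaps(streams):
    """Flag high-impact streams that lack a backup or sit on a local workstation."""
    return [s["stream"] for s in streams
            if s["gxp_impact"] == "high"
            and (not s["backup"] or s["storage"] == "local workstation")]

print(lifecycle_gaps(data_streams))
```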


5. Delayed Documentation  

Often there are crucial aspects that are seen as non-critical and get pushed aside to get the system live, with the promise of working on them at a later date. These are parts of data governance that are integral and often require testing and planning up front. The documentation and procedures that get unfairly deprioritized usually include networking diagrams, incident management, archiving procedures, backup procedures, asset and spare-parts inventory and data review procedures.


These are the sins that inhibit your system from growing well with your organization. In the next part of the series, we explore ways to plan for scalability in your system alongside your organizational growth.



The Neuroscience Behind a Successful Data Integrity Leader

12 November 2023


The brain is a complex and mysterious structure with its earliest studies dating back to Ancient Egypt. Neuroscience has numerous branches, but in this article, I focus on cognitive neuroscience, drawing from psychology and behavioral tendencies in humans. Scientists can now understand many cognitive attributes and how to enhance and build upon characteristics to develop ourselves and the way we interact with the world. I explore key characteristics of data integrity leaders and how they have pinpointed and built upon their strengths to guide organizations to be accountable for their ownership of the data lifecycle.


Data integrity leaders play a critical role in organizations, seamlessly aligning departments to a quality mindset. This requires attention to detail, comprehension of regulations and specifications, high emotional intelligence, and the ability to identify and implement changes across teams.


These individuals can self-assess, find weaknesses, and improve on themselves, which makes them much more likely to successfully translate the same continuous improvement mindset to an organization. They are bursting with curiosity towards roles, systems, business processes, and relationship dynamics, all while being acutely aware of how this is all entangled together to encompass an organization as a whole.

This skill set is in high demand: a rare combination of learner, analyst, effective communicator, and achiever.


Trait 1: Learner

Effective data integrity leaders are self-disciplined and continually improving themselves and everything they touch, both in their work and personal lives. They have experienced the hard work required for self-improvement; they have experienced failure and learned to pivot and adjust to new information. They can scale this up to individuals, teams, and organizations.


They love the process of learning and are not intimidated by the new or unknown. They are perfectly comfortable being uncomfortable. They learn technology, procedures, regulations, personalities, team dynamics, and more. They have mastered the art of learning and retaining information.


Recommended resources: Andrew Huberman: Learning and Memory


Trait 2: Curiously Analytical

Curiosity may have killed the cat, but it is associated with greater satisfaction with life and psychological well-being. When your mind is expecting and primed to learn new ideas, you are more likely to actively recognize solutions than passively let them pass by.


Effective data integrity leaders are proactive, constantly analyzing data, trends, and root causes of problems. They are not satisfied with surface-level explanations. They prefer to roll up their sleeves and conduct their own investigations. This avoids the risk of reaching incorrect conclusions built on cognitive bias.


They are deliberate, detail-oriented, and continuously monitoring. They think critically and thoroughly while being aware of their own limitations. They slow down and know how to ask for help from experts as needed.


They treat failure as a learning opportunity that helps them understand why something is not working. It fuels them to keep looking for solutions. Knowing why something doesn't work is more valuable to them than having it work without any effort required.


Recommended resources: Thinking, Fast and Slow by Daniel Kahneman


Trait 3: High Emotional Intelligence

Emotions are an integral part of the human experience, shaping our perceptions, behavior, and decision-making. Effective data integrity leaders have high emotional intelligence, which means they are able to understand and manage their own emotions, as well as the emotions of others. This is essential for building organic relationships with stakeholders and influencing teams.


They have an inclination to empathize, understand requirements, and serve stakeholders in finding the best solution.


They have excellent written and oral communication skills since they must tailor their message to the audience accordingly. A system SME requires a different language than a VP.


This individual has learned to pause and assess situations before reacting, and has developed sharp emotional regulation skills. This self-awareness allows them to consciously choose their responses and work seamlessly with different types of people.


They have a win-win mentality that helps them leverage compliance and technology as the most efficient and resource-saving solution. Their empathy and strong active-listening skills help them form trusted relationships with people who are then more likely to work with them toward a goal rather than against them. They can frame and deliver solutions in ways that are more likely to be welcomed than immediately rejected.


Recommended resources: The Five Dysfunctions of a Team by Patrick Lencioni, How to Talk to Anyone by Leil Lowndes


Trait 4: Achiever

What good is learning, analyzing, and inspiring teamwork if you can't drive the changes needed to actually make improvements?


Effective data integrity leaders are able to set and achieve goals, even in the face of adversity. They are also able to motivate and inspire others to work towards common goals.


They have mastered effective ways of implementing change and are able to follow through. They lead and guide their teams, who then do the same with other departments. They equip teams to take accountability for the data, processes, and systems they own.


Effective data integrity leaders understand that they cannot make all organizational, procedural, and systemic changes themselves. They recruit people and teams by educating, training, and empowering them to be stewards of their own data, processes, and systems.


They are able to prioritize high-risk business systems as well as pick the low-hanging fruit to meet goals. They can set priorities not just for themselves but for individuals, teams, and leaders.


Source: The 7 Habits of Highly Effective People by Stephen R. Covey


Conclusion

Effective data integrity leaders are a rare breed, but they are essential for organizations that want to succeed in protecting their data and maintaining their reputation. 


They are well-rounded and capitalize on their curiosity to develop their learning ability. They build trust and understand that soft skills are equally as important as hard skills. In these attributes, they aim to be proactive rather than reactive. Their plan is to be multiple moves ahead, to predict and mitigate. By understanding the cognitive neuroscience behind these leaders and developing the key characteristics, organizations can better identify and develop the right people for these critical roles.


Let's Talk Data: Static and Dynamic GxP Records

28 August 2023


In the realm of data integrity, understanding the intricacies of GxP records is essential. When it comes to data, the distinction between static and dynamic data records holds significant implications, especially in terms of ALCOA++ principles. So, let's delve into these concepts to shed light on their significance.


Defining ALCOA++

At the core of data integrity lie the principles of ALCOA++: data should be Attributable, Legible, Contemporaneous, Original and Accurate, plus Complete, Consistent, Enduring and Available, with the final "+" adding Traceable.

These principles form the foundation of reliable and trustworthy data management which ultimately ensure the safety and quality of pharmaceutical products. But how do static and dynamic data records fit into this framework?
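
Before we get to that, here is a toy sketch (not a compliance tool; the field names and the 15-minute window are assumptions) showing how a few of the ALCOA++ attributes could be checked for a single record:

```python
from datetime import datetime, timedelta

# Toy record: fields loosely mapped to a few ALCOA++ attributes.
record = {
    "value": 12.31,
    "unit": "mg",
    "recorded_by": "jdoe",                        # Attributable
    "event_at":    datetime(2023, 8, 28, 9, 14),
    "recorded_at": datetime(2023, 8, 28, 9, 15),  # Contemporaneous
    "source": "balance BAL-004",                  # Original / Traceable
}

def quick_alcoa_checks(rec, max_delay=timedelta(minutes=15)):
    """Return pass/fail for a few illustrative ALCOA++ checks."""
    return {
        "attributable":    bool(rec.get("recorded_by")),
        "contemporaneous": (rec["recorded_at"] - rec["event_at"]) <= max_delay,
        "traceable":       bool(rec.get("source")),
    }

print(quick_alcoa_checks(record))
```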


Static Data: The Set and Stable

Static data refers to data and metadata that are inherently fixed. This type of data yields a final result that doesn't require additional adjustments. Typically, static data demands less storage space as it’s fixed and predictable. This can include readings from instruments such as balances, gauges, and sensors.


Dynamic Data: The Interactive and Evolving

Dynamic data, on the other hand, requires interaction from the user to attain a final result. This data evolves through collaboration between the user and the content, often necessitating manual processing which is slightly subjective and can differ from user to user. This type of data typically demands more storage space and includes processes like chromatographic or lab assays that have a baseline but require further manipulation.
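
To make the contrast tangible, here is a minimal sketch of how the two record types might be modeled (the field names are illustrative assumptions): a static balance reading needs no further processing, while a dynamic chromatogram carries its raw trace plus the user-chosen processing parameters that produce the final result.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass(frozen=True)  # static: fixed once captured
class BalanceReading:
    instrument_id: str
    value_mg: float
    timestamp: str
    recorded_by: str

@dataclass  # dynamic: raw data plus user-driven processing
class Chromatogram:
    instrument_id: str
    raw_trace: List[float]  # the original signal
    # Baseline and integration choices can differ between analysts,
    # so the parameters (and their audit trail) are part of the record.
    processing_params: Dict[str, float] = field(default_factory=dict)

    def peak_area(self) -> float:
        """Toy 'result' that depends on how the user chose to process the trace."""
        baseline = self.processing_params.get("baseline", 0.0)
        return sum(max(point - baseline, 0.0) for point in self.raw_trace)
```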


Why Does it Matter?

Safeguarding Your GxP Data

So, how should you approach these different types of data? Much like preserving treasured possessions, such as a cassette tape or a vintage photograph, a tailored approach is necessary:

Final Thoughts

As the data landscape evolves, embracing ALCOA++ principles and understanding the nuances of static and dynamic data records becomes non-negotiable. Compliance, data integrity, and effective management hinge on this understanding. It's time to assess, adapt, and ensure your data's reliability and compliance.


Happy Testing!

How To: Progressive Overload for Your Data Integrity Remediation

14 August 2023 


Progressive overload is an efficient and intuitive approach to weightlifting. The concept is simple: increase the intensity or difficulty, specifically by 10% or less each week, to maximize results while reducing the risk of injury or burnout. It's an instinctive idea that likely applies loosely to many of our goals. Some goals are simple, while others only appear that way. 


Here's what I mean by this.


As a New Year's resolution, I set a goal to complete 5 pull-ups. Wanting to expand beyond cardio, I saw this as a way to expand my physical skills. I assumed I could manage at least 1 pull-up and would need to build endurance to reach 5, confident I could even surpass that within a year. I promptly forgot about this until mid-June when I decided to start working towards it. After a cardio session, I aimed to do my first pull-up, with the goal of reaching two the following week. However, reality hit hard—I could barely dead hang for a minute due to poor grip strength. Looking back, a wiser person wouldn't have been surprised. I begrudgingly had to consult an old physiology textbook to understand the muscles involved and how to build their strength so I could achieve this goal.

Let's break down a few red flags in my thought process:

Now, are we making similar mistakes when evaluating and setting goals for our data integrity remediation? In theory, we might think we're applying the same approach to our DI remediation plans, but in practice, the incremental increase in "resistance" isn't accurately defined. So, how do we ensure a successful DI remediation?

All of this requires deliberate planning and disciplined execution. Data integrity is not a single muscle; it's a system woven into the organization, relying on specific members arranged to perform complex functions. It's an ongoing, evolving process that builds upon previous work.
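
If it helps to see the "progressive" part as plain arithmetic, here is a tiny sketch applying the "increase by no more than 10% per week" idea to a remediation workload (the starting value and the horizon are arbitrary assumptions):

```python
# Project weekly targets under an "increase by at most 10% per week" rule.
def weekly_targets(start: float, weeks: int, rate: float = 0.10) -> list:
    """Return one target per week, each compounding by `rate` over the last."""
    targets = [start]
    for _ in range(weeks - 1):
        targets.append(targets[-1] * (1 + rate))
    return targets

# Example: start by remediating 10 records or review items per week.
for week, target in enumerate(weekly_targets(10, 8), start=1):
    print(f"week {week}: ~{target:.1f}")
```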

In an era of instant gratification and quick results, it's crucial to pause, comprehend, and establish sustainable, realistic goals. The struggles of last week can transform into today's effortless reflexes when endurance is consistently developed. This isn't a one-time project; the health of your Data Integrity program mirrors an organization's commitment to quality.

Streamlining Your Validation Efforts Responsibly

15 July 2023

As we approach the one-year mark since the FDA released the CSA draft guidance, it's crucial to assess whether your validation efforts are effectively eliminating redundancy and burdensome processes on new and existing systems. The aim of this draft is to promote agility and flexibility in assurance activities while ensuring that systems remain in a validated state. If your policies and procedures have not yet been updated to reflect the updated guidance, it's advisable to consider doing so to reduce costs and embrace new technologies.

Emphasizing Risk-Based Approaches:

It's important to note that the focus should be on a risk-based, critical thinking approach rather than a fear-based, one-size-fits-all testing strategy. The draft provides detailed clarification on how to assess system risks and determine the appropriate level of rigorous testing. 
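
As a toy illustration of what "determine the appropriate level of rigorous testing" can look like in practice, here is a minimal sketch (the risk categories and their mapping to assurance activities are my own simplification for illustration, not the guidance's wording):

```python
# Simplified mapping from assessed risk to assurance rigor (illustrative only).
ASSURANCE_BY_RISK = {
    "high":   ["scripted testing", "traceable evidence", "formal review"],
    "medium": ["unscripted / exploratory testing", "summary evidence"],
    "low":    ["vendor assurance plus a record of acceptance"],
}

def assess_risk(gxp_impact: bool, direct_quality_or_safety_impact: bool) -> str:
    """Coarse risk call based on intended use (an assumption, not the guidance text)."""
    if gxp_impact and direct_quality_or_safety_impact:
        return "high"
    if gxp_impact:
        return "medium"
    return "low"

risk = assess_risk(gxp_impact=True, direct_quality_or_safety_impact=False)
print(risk, "->", ASSURANCE_BY_RISK[risk])
```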

Define the Intended Use of the System:


Understand the Impact on Production or Quality Systems:


Plan Testing Rigor Based on Risk Ratings:


Capture Sufficient Evidence of Software Functionality:


Regularly Re-evaluate System Risk and Compliance:



The increasing adoption of automation, robotics, artificial intelligence, and other digital capabilities within the pharmaceutical and medical device sectors has significant implications for patient safety. This draft guidance aims to reduce associated risks, minimize error sources, and enhance overall quality and safety.


Conclusion:

By responsibly streamlining your validation efforts and adhering to the risk-based, critical thinking approach outlined in the CSA draft guidance, you can effectively reduce redundancy, lower costs, and embrace new technologies. This will ultimately contribute to improved quality and safety in the pharmaceutical and medical device industries.



Happy testing!

Unlocking the Power of Effective User Requirement Specifications 

20 June 2023

Have you ever come across certain requirements in your User Requirement Specifications (URS) that seemed vague or outdated? In this post, we will explore common pitfalls in URS documentation and discuss how to overcome them. By addressing these issues, you can ensure that your URS is clear, concise, and aligned with the specific system it pertains to. Moreover, we will delve into the importance of testability and provide actionable tips to enhance your URS for optimal results.
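
Before digging into the pitfalls, here is a small before-and-after sketch of what "testable" means in practice (the requirement ID, wording and acceptance criteria are hypothetical examples, not taken from any real URS):

```python
# A vague requirement is hard to test: which actions? retained how long? reviewed by whom?
vague_requirement = {
    "id": "URS-012",
    "text": "The system shall have an audit trail.",
}

# A testable version names the data captured and pairs it with acceptance criteria.
testable_requirement = {
    "id": "URS-012",
    "text": ("The system shall record user ID, timestamp, old value, new value and reason "
             "for change for every create, modify and delete action on GxP records."),
    "acceptance_criteria": [
        "Modify a GxP record as a standard user; verify all five fields appear in the audit trail.",
        "Attempt to edit or delete an audit trail entry as an administrator; verify it is prevented.",
    ],
}

print(f"{testable_requirement['id']}: "
      f"{len(testable_requirement['acceptance_criteria'])} acceptance criteria defined")
```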

Guidances are Not Requirements:


Moving Beyond Generic Regulations:


Audit Trail Precision:


User Level Permissions:


Distinguishing Disaster Recovery and Backups:


Archiving Requirements:


System Maintenance Document:


Network Diagram Updates:


Data Definitions and Management:


Improving your User Requirement Specifications can significantly enhance the clarity and effectiveness of your documentation. By avoiding generic statements, precisely defining requirements, and incorporating specific testing criteria, you can create a more robust URS. A robust URS will translate to thorough testing which minimizes system issues post go-live. Don't settle for a subpar URS—take the opportunity to refine your requirements and achieve better outcomes for your projects.

Software Preventative Maintenance for Automated Systems

18 May 2023

What happens when a preventative maintenance program crosses with a periodic review of a software-based system? While I'm not sure what that Punnett square would look like, there are crucial points to consider incorporating into the periodic review of automated and computerized systems to catch issues preemptively and minimize downtime.

The frequency of these reviews should be pre-determined and based on the risk associated with each system and can be combined with existing periodic reviews. Each review should build upon the documentation from the previous one, and all stakeholders should be made aware and involved in addressing any discovered issues. Thorough documentation of the review ensures consistent and reliable information that can help predict resource requirements and avoid disruptions in production.
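
As a small sketch of the "frequency based on risk" idea (the intervals per risk level are assumptions for illustration, not a regulatory requirement):

```python
from datetime import date, timedelta

# Assumed review intervals per risk level, in months (illustrative, not prescriptive).
REVIEW_INTERVAL_MONTHS = {"high": 12, "medium": 24, "low": 36}

def next_review(last_review: date, risk: str) -> date:
    """Schedule the next periodic review based on the system's risk level."""
    months = REVIEW_INTERVAL_MONTHS[risk]
    return last_review + timedelta(days=30 * months)  # coarse month approximation

print(next_review(date(2023, 5, 18), "high"))  # roughly one year later
print(next_review(date(2023, 5, 18), "low"))   # roughly three years later
```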

5 Reasons to Explore the Audit Trail Before Purchasing GxP Software

12 April 2023

Although audit trails are often underutilized, they contain crucial information that can give you a preview of how compatible a system will be with current and planned processes. The role of an audit trail is to monitor compliance during system use, much like how log books and lab notebooks record critical raw data. However, we can learn to leverage it by asking the right questions to help focus on the products best suited for an organization.

Here are five ways to inspect the audit trail to give an unadulterated view of the system before committing to it:

1.  It must meet internal and industry requirements. The usefulness of an audit trail lies in what it captures: the who, what, where and when (think Electronic Records, Electronic Signatures and ALCOA++). Does it adequately capture the critical data needed in the current process? Does it capture obvious errors? Can it be easily filtered to isolate data? Is it user-friendly? Is it too friendly (can anything be removed from it)? These are things that are easy to spot quickly! (A small sketch of what such a record might look like follows this list.)

2.  Will the system realistically add to or reduce the resource load? Based on current or planned processes, will having this audit trail require more resources or bandwidth to make it work, or will it drastically reduce current resources' workloads by streamlining and simplifying reviews and investigations? Sometimes audit trails may not capture data deemed critical by users, which will require extra manual processes to capture this information. The reverse is also possible, where it may be too advanced for the intended use, capturing unnecessary or extraneous information that may fill up the storage area with irrelevant data. If so, it may require documented justification for omitting review and storage. Be realistic about current company processes and have a clear plan on how to implement any modifications needed for the team(s). For example, buying a Windows system might make more sense on paper, but if the users, site, interfacing systems, and processes are set up to use a Mac, it will be harder to implement successfully and cost more money in the long run.

3.  It will give insight into the vendor's customer service and industry knowledge. Reading through the manual, digging through the system, and asking questions will set clear expectations with the vendor from the start. They will assign their best to help you work through and find a way to integrate their software into existing processes. It will also help them understand current industry requirements and find ways to enhance the system to fit the current needs. It becomes a symbiotic relationship where they tweak their products to fit the current industry based on [clearly articulated] requirements, and consumers have better options [to fulfill said requirements]. If it's hard to get satisfying responses during this investigation phase, it should be considered when moving forward with a contract and when making project timelines.

4.  It may aid in providing adequate information on the system's security. Does it have the ability to show unscrupulous user actions such as deletions or modification of system settings? Does it hide sensitive information (oops, yes, I have seen this happen before!)? Does it easily capture all system modifications performed by all user levels? If it does not, are there processes in place to mitigate this? This is a great indicator of how much training, review, SOP writing, and form creation will be required.

5. How does the lifecycle management of the audit trail fit with the current company procedures? How many teams/systems/processes are required to ensure that the audit trail is maintained securely? Does it remain on the system or can it be stored centrally and integrated with the current setup? The critical information in the audit trail needs to be retained based on internal and external requirements, so pre-planning for its storage requirements and capacities should be mapped with the company’s available resources. This should help clarify how easy (or hard and thus how many more resources are needed) the maintenance of the audit trail will be.
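
To ground the questions above, here is a toy sketch of what an audit-trail record and a quick filter might look like (the field names and entries are invented for illustration; real products differ):

```python
from datetime import datetime

# Toy audit-trail entries: the "who, what, where, when" described in point 1.
audit_trail = [
    {"who": "jdoe",  "what": "modified result", "where": "sample S-101",
     "when": datetime(2023, 4, 10, 14, 2),
     "old": "12.1", "new": "12.3", "reason": "transcription error"},
    {"who": "admin", "what": "changed setting", "where": "system configuration",
     "when": datetime(2023, 4, 11, 9, 30),
     "old": "audit trail ON", "new": "audit trail OFF", "reason": ""},
]

def filter_trail(entries, who=None, what_contains=None):
    """Simple filtering: the kind of isolation a reviewer needs to do quickly."""
    return [e for e in entries
            if (who is None or e["who"] == who)
            and (what_contains is None or what_contains in e["what"])]

# Surface configuration changes - and notice the missing reason on the second entry.
for entry in filter_trail(audit_trail, what_contains="setting"):
    print(entry["when"], entry["who"], entry["what"], "->", entry["new"])
```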

Keep in mind that the level of pre-investigation into a system’s audit trail should correlate with its risk. A high-risk, high-use, high-cost system would not be treated the same as a low-use, low-risk, low-cost system. A successful deployment of a new system depends on proper planning. These questions will help map the resources required (i.e. man-hours, networking/infrastructure, SOPs, training, and time) and ultimately planned costs. The more familiar you are with the system upfront, the less chance of roadblocks.



Data Integrity: Not Just IT's Problem

17 March 2023


Data. It's coming from everywhere and there's an abundance of it - some perhaps seemingly extraneous. From cellphones, to social media apps, to smart home devices, there is always a digital footprint that is being produced by you, about you, and, indirectly, about others. 

What is data integrity, and how does it apply to all of us in the pharmaceutical sector? Data integrity is the overall accuracy, completeness, and consistency of data (think ALCOA+ principles). It also refers to the robustness of a product's data with regard to regulatory compliance and security. It is ensured by, but not limited to, a collection of processes, rules, and standards.

When we think of raw data, we typically think of entries in lab or log notebooks, signed print-outs, reports, batch records, etc. We must follow cGxPs; we cross our t's and define our 'EE's. Nowadays, most of the systems available to us are capable of more than spitting out print-outs as a one-and-done deal. Typically, there is an electronic data trail that keeps a log of who, when and what every time the system is used. That being said, not all audit trails are created equal (more on this another time), so we should be cautious! However, if this data automatically captures the same information that we are also manually recording, why are we doing the work twice? More importantly, which would we present to a regulatory agency if the need arose? Do we retain both or discard one? How sure are we that the electronic and paper logs would come up identical? All of this would be hard to explain, as it seems suspicious or, at the very least, unorganized. Not to mention the cost of all the resources needed to maintain this duplicated information.

Now that we've (very lightly) defined DI, let's get down to my bold claim: we would not assume that IT(/IS) is responsible for the raw data produced by all systems and instruments any more than they would be responsible for reviewing, organizing and maintaining all logbooks/notebooks, reports, batch records, etc. The Business Owner, alongside the Subject Matter Expert, is responsible for understanding the process and the raw data that must be collected and maintained. IT/IS assists with the technical aspects (think network design, data storage, security, etc.), alongside engineering, maintenance, document control and quality, all contributing their requirements and experience to the process. Teamwork is essential; each group possesses a unique skill set and contributes to the data lifecycle, continuously evaluating new processes and systems and improving current ones. Simply put, if you own a part of the process, your input is required.

The automatic generation of data is a feature that is meant to put us at ease, not chase us directly into the cumbersome arms of paper and repetitive processes. When we take the time to plan and understand our data, we intuitively build data integrity into our systems and processes: we all work more efficiently, existing system use is optimized, data is accounted for and understood, resources are unburdened, and regulatory agencies are at ease with (dare I say maybe even impressed by) our processes, which ultimately ensures patient safety and drug quality.

This is rarely a one-time effort; rather, it requires continuous vigilance and a shift of mindset. There is quite a bit of up-front cost involved (multiple meetings, uncomfortable work, more resources, process changes); however, it is necessary. The path of efficiency changes along with evolving regulations, technology, products and processes. We are accountable for the data we produce and must equip ourselves to not only defend but also optimize every last keystroke.