Archive for the 'FDA' Category

Software Doesn’t Have An MD

I got a kick out of this Andreessen Horowitz piece: Digital Health/SOFTWARE DOESN’T HAVE AN MD.

I’m sure ‘the kid in the garage without a degree’ is no dummy, but this premise:

And so that large percentage of medicine that is effectively being practiced by non-MDs is going to expand.

is simply ludicrous.

There’s a big difference between creating health and wellness appliances or mobile applications and actually diagnosing and treating patients. The distinction is outlined in FDA clarifies the line between wellness and regulated medical devices. If you claim your product acts like a doctor (treats or diagnoses), or it doesn’t fall into the “low risk” category, then your company will have to follow FDA regulatory controls.

FDASIA Health IT Report

The Food and Drug Administration Safety and Innovation Act (FDASIA) required the FDA to develop:

a report that contains a proposed strategy and recommendations on an appropriate, risk-based regulatory framework pertaining to health information technology, including mobile medical applications, that promotes innovation, protects patient safety, and avoids regulatory duplication.

Here’s the report: FDASIA Health IT Report (warning: PDF).

It looks like EMR/EHR vendors (administrative and health management functionality) don’t have to worry about FDA regulatory oversight. The medical device category (of course) does:

FDA would focus its oversight on medical device functionality because, in general, these functions, such as computer aided detection software and remote display or notification of real-time alarms from bedside monitors, present greater risks to patient safety than health IT with administrative or health management functionality.

 

FDA Regulation of Mobile Medical Apps

The FDA has issued their final guidance on mobile medical applications: Keeping Up with Progress in Mobile Medical Apps. The guidance document (PDF) will “give mobile app creators a clear and predictable roadmap to help them determine whether or not their products will be the focus of FDA’s oversight.”

The regulatory approach is as you would expect (my highlight):

FDA intends to apply its regulatory oversight to only those mobile apps that are medical devices and whose functionality could pose a risk to a patient’s safety if the mobile app were to not function as intended.

There are six categories of mobile applications for which the FDA intends to exercise enforcement discretion:

  1. Help patients/users self-manage their disease or condition without providing specific treatment suggestions;
  2. Provide patients with simple tools to organize and track their health information;
  3. Provide easy access to information related to health conditions or treatments;
  4. Help patients document, show or communicate potential medical conditions to health care providers;
  5. Automate simple tasks for health care providers; or
  6. Enable patients or providers to interact with Personal Health Records (PHR) or Electronic Health Record (EHR) systems.

If a mobile application is considered a medical device it will be classified as such — class I (general controls), class II (special controls in addition to general controls), or class III (premarket approval) — and the manufacturer will be required to follow Quality System regulations (which include good manufacturing practices, §820.30) in the design and development of that application.

For any organization that is not already under FDA regulatory control, this is a big deal. Given that there are thousands of medical applications already out there, even this limited-scope approach will likely affect many companies. More information is here: Mobile Medical Applications.

The guidance includes many examples (including mobile apps that are not medical devices) and an FAQ.


FDA Recognition of Medical Device Standards for Interoperability

This is a follow-up to Interoperability: Arrested Progress.

The FDA has recognized voluntary interoperability standards for medical devices: Improving Patient Care by Making Sure Devices Work Well Together.

The FDA and HHS have (my highlight):

published a list of recognized standards for interoperability intended to assist manufacturers who elect to declare conformity with consensus standards to meet certain requirements for medical devices.

The standards are searchable here: Recognized Consensus Standards. The Software/Informatics area currently lists 54 items.

Some consider this a “landmark announcement” (see here) but “voluntary” and “elect to declare” just seem like more of #1 (same old, same old) to me.

UPDATE (9/4/2013): More here: FDA Updates List of Recognized Standards, Confusion Ensues

Interoperability: Arrested Progress

When it comes to the state of interoperability in the medical device industry there couldn’t be a better metaphor than Arrested Development*: a dysfunctional family made up of eccentric, well-meaning personalities, each doing their own thing, oblivious to each other and the rest of the world.

Healthcare Un-Interoperability has long been one of my favorite subjects. Let’s review where things are these days.

The article Health IT Interoperability: How Far Along Are We? provides a nice summary of the current state of HIT interoperability. This is particularly important because:

hospitals using basic EHR systems tripled from 12.2 percent in 2009 to 44.4 percent in 2012

For better or worse, it’s the monetary incentives of the Affordable Care Act that are pushing doctors to electronic medical records, and they are the primary reason for the accelerated rate of EHR adoption. The goal of having more electronic health records is to improve the quality of patient care. Reduction of medication-related errors is a great example: Lack of Interoperability has Ownership for Medication Errors. The rapid uptake of these systems can also present problems. For example, in Emergency Departments: EHR systems pose serious concerns, report says.

Nevertheless, it’s clear that electronic medical records are the future of healthcare data management. The downside of this growth, from an interoperability point of view, is that there are that many more systems out there that don’t talk to each other!

Initiatives like CommonWell and Healtheway are moving in the right direction but are just getting off the ground. Also, these types of efforts are often far removed from the medical device industry and have little impact on software development and interface decision making.

Let’s step back and look at the HIMSS definition of Interoperability:

  1. Foundational: Data exchange with no interpretation.
  2. Structural: Allows identification of data structure (fields) but not necessarily data content.
  3. Semantic: Data structuring and codification (interpretation) of the data.

For all practical purposes only #3 (semantic) has value when it comes to the exchange of data with a medical device. As noted in Interoperability: Not a non-issue:

Semantic interoperability continues to be a major challenge and, if not addressed, will have a serious impact on the quality of care.

The same point is made here: Interoperability vs Health Information Exchange: Setting the Record Straight.  Just because you can send it (exchange) doesn’t mean the recipient can understand it (interoperability).
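The exchange-vs-understanding gap is easy to sketch in code. In this toy Python example (the field names, units, and mapping table are all made up for illustration), two hypothetical devices report the same vital sign; both messages parse fine structurally, but only a shared terminology and unit convention makes them comparable:

```python
import json

# Two hypothetical devices report the same vital sign with different
# field names and units -- structurally valid JSON, semantically ambiguous.
msg_a = json.loads('{"HR": 72}')            # beats per minute?
msg_b = json.loads('{"pulse_rate": 1.2}')   # beats per second?

# Semantic interoperability requires an agreed-upon terminology and
# unit convention. This toy mapping table plays that role.
TERMINOLOGY = {
    "HR":         ("heart_rate", "1/min", 1.0),
    "pulse_rate": ("heart_rate", "1/s",   60.0),  # convert to 1/min
}

def normalize(message):
    """Map vendor-specific fields onto a shared concept and unit."""
    result = {}
    for field, value in message.items():
        concept, _unit, factor = TERMINOLOGY[field]
        result[concept] = value * factor
    return result

print(normalize(msg_a))  # {'heart_rate': 72.0}
print(normalize(msg_b))  # {'heart_rate': 72.0}
```

The hard part, of course, is not the conversion code — it is getting every vendor to agree on the mapping table.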

One topic that is always part of this discussion is standards. It’s unfortunate that, due to technical and (mostly) non-technical reasons, the following is often true:

(image: standards vs. interoperability)

Even more disheartening is when you read that a standards-based organization (OpenStack in this case) can’t necessarily make interoperability magically happen:

… success depends upon a single large vendor assuming leadership. Interoperability will follow, but it won’t be designed by committee.

The distress of a well-focused cloud computing API involving only a handful of vendors makes the outlook for HIT interoperability look particularly bleak. To make matters worse, the use of OSS in FDA-regulated products faces additional challenges that are not even seen in most other industries (see Open Source Medical Device Connectivity).

This is all good news for businesses that provide products and services to fill the connectivity gap. Companies like Capsule, iSirona, and Nuvon are often the only effective way to provide an integration solution to a large number of customers.

I should note that there are some bright spots in the interoperability landscape. For example, the Continua Health Alliance has successfully pulled together over 200 companies to create a vision for interoperable personal connected health solutions. Also, the West Health Institute is building standardized communication protocols into their embedded software for medical devices. These and numerous other successes provide hope, but they are still just the tip of a very large iceberg.

Dr. Julian Goldman sums up the current situation in Medical Device Interoperability: A ‘Wicked Problem’ of Our Time:

Our years of work on medical device interoperability have led us to see the barriers (including technical, business, liability, standards, and regulatory factors) as “wicked problems,” in which there is little agreement about “the problem,” no agreement on a solution, and problem solving is complex because of external constraints.

Others (Is HIT interoperability in the nature of healthcare?) see the proprietary business model of major HIT companies as the primary culprit.

So what are some possible scenarios for the future?

  1. Same old, same old.  This is essentially Einstein’s definition of insanity: doing the same thing over and over again and expecting different results.
  2. Federally enforced standards and regulations. Dr. Goldman’s suggestion to require manufacturers to fully disclose their communication interfaces? Given the current anti-regulatory environment and budget restrictions, this seems unlikely to happen.
  3. A hegemon, like OpenStack above?  The healthcare industry is too diverse and complex. There is no single player that could even begin to tip the scales.

At least for the foreseeable future it looks like #1 (insanity) is going to prevail. If I’m missing some huge game-changer, please let me know.

In the meantime, let another episode begin!

                                             
*No deep meaning here. Certainly not like Arrested Criticism.  I’m just comparing the medical device industry to a bunch of fictional crazy people.

Medical Device Innovation Consortium

The FDA has announced the Medical Device Innovation Consortium (MDIC), which aims to help medical device companies get their products to market faster. See FDA, Private Groups Team Up to Speed Device Approval.

The term Regulatory Science is used 12 times on the single-page MDIC Web site, so it must be important. The FDA has been using regulatory science in other health-related areas since early 2010: see Advancing Regulatory Science.

This consortium is part of a much broader strategy (see the strategic plan) to improve both innovation and safety in FDA-regulated products. The MDIC site talks about subcommittees and projects, but it’s unclear which specific medical device topics will be addressed. It will be interesting to follow their growth and progress.

When Open-source can Kill or Cure

The use of open-source software in medical devices has been a topic of discussion for many years. The Economist article Open-source medical devices: When code can kill or cure highlights continuing activity in the academic community and interest by the FDA in developing processes around its use.

One of the big unknowns in this area is how a community might be formed (Dreaming of Flexible, Simple, Sloppy, Tolerant in Healthcare IT  has some thoughts on this).

From the article:

Eventually, medical devices might evolve into collections of specialised (and possibly proprietary) accessories, with the primary computing and safety features managed by an open-source hub.

This is in reference to both hardware and software, but in either case one major challenge will be how to incentivize contributions.  Open-source means free to use and modify. If there is no financial gain to be had, other benefits for contributing need to be developed.  Also, proprietary and open-source in the same sentence seems like an oxymoron, so I’m not sure how that’s going to work.

Another barrier would be liability risk. Let’s say you contributed software to this hub and that component ended up in a device that harmed or killed someone. All of the legal waivers, disclaimers, and releases in the world wouldn’t necessarily keep you out of a courtroom.

I don’t think the basic arguments discussed in Open Source Medical Device Connectivity have changed a great deal.  At the end of the day the medical device manufacturer will ultimately be responsible for ensuring the safety and efficacy of their device.

OTS/SOUP Software Validation Strategies

My last discussion of Off-The-Shelf software validation only considered the high-level regulatory requirements.  What I want to do now is dig deeper into the strategies for answering question #5:

How do you know it works?

This is the tough one. The other questions are important, but relative to #5, answering them is pretty easy.  How to answer this question (i.e. accomplish this validation) is the source of a lot of confusion.

There are many business and technical considerations that go into the decision to use OTS or SOUP software as part of a medical device. Articles and books are available that include guidance and general OTS validation approaches, e.g. Off-the-Shelf Software: A Broader Picture (warning: PDF) is very informative in this regard:

  • Define business’ use of the system, ideally including use cases and explicit clarification of in-scope and out-of-scope functionality
  • Determine validation deliverables set based on system type, system risk, project scope, and degree of system modification
  • Review existing vendor system and validation documentation
  • Devise strategy for validation that leverages vendor documentation/systems as applicable
  • Create applicable system requirements specification and design documentation
  • Generate requirements-traceable validation protocol and execute validation
  • Put in place system use, administration, and maintenance procedures to ensure the system is used as intended and remains in a validated state

This is great stuff, but unfortunately it does not help you answer question #5 for a particular type of software. That’s what I want to try to do here.

OTS really implies Commercial off-the-shelf (COTS) software. The “commercial” component is important because it presumes that the software in question is a purchased product (typically in a “shrink-wrapped” package) that is designed, developed, and supported by a real company.  You can presumably find out what design controls and quality systems are in place for the production of their software and incorporate these findings into your own OTS validation.  If not, then the product is essentially SOUP (keep reading).

Contrast OTS with Software of Unknown Provenance (SOUP).  It is very unlikely that you can determine how this software was developed, so it’s up to you to validate that it does what it’s supposed to do.  In some instances this may be legacy custom software, but these days it probably means the integration of an open source program or library into your product.

The following list is by no means complete. It is only meant to provide some typical software categories and the strategies used for validating them. Some notes:

  • I’ve included a Hazard Analysis section in each category because the amount of validation necessary is dependent on the level of concern.
  • The example requirements are not comprehensive. I just wanted to give you a flavor for what is expected.
  • Always remember, requirements must be testable. The test protocol has to include pass/fail criteria for each requirement. This is QA 101, but it is often forgotten.
  • I have not included any example test protocol steps or reports.  If you’re going to continue reading, you probably don’t need help in that area.

Operating Systems

Examples:

  • Windows XP SP3
  • Windows 7 32-bit and 64-bit
  • Red Hat Linux

Approach:

  1. Hazard Analysis: Do a full assessment of the risks associated with each OS.
    • Pay particular attention to the hazards associated with device and device driver interactions.
    • List all hazard mitigations.
    • Provide a residual Level of Concern (LOC) assessment after mitigation — hopefully this will be negligible.
    • If the residual LOC is major, then Special Documentation can still be provided to justify its use.
  2. Use your full product verification as proof that the OS meets the OTS requirements. This has validity since your product will probably only be using a small subset of the full capabilities of the OS.  All of the other functionality that the OS provides would be out of scope for your product.
  3. This means that a complete re-validation of your product is required for any OS updates.
  4. There is no test protocol or report with this approach. The OS is considered validated when the product verification has been successfully completed.

Compilers

Examples:

  • Visual Studio .NET 2010 (C# or C++)

Approach:

  1. Hazard Analysis:
    • For the vast majority of cases, I think it is safe to say that a compiler does not directly affect the functioning of the software or the integrity of the data. What a program does (or doesn’t do) depends on the source code, not on the compiled version of that code.
    • The compiler is also not responsible for faults that may occur in devices it controls. The application just needs to be written so that it handles these conditions properly.
    • For some embedded applications that use specialized hardware and an associated compiler, the above will not necessarily be true. All functionality of the compiler must be validated in these cases.
  2. For widely used compilers (like Microsoft products) full product verification can be used as proof of the OTS requirements.
  3. Validation of a new compiler version, e.g. upgrading from VS 2008 to VS 2010: showing that the same code base compiles and all unit tests pass in both versions can be used as proof. This assumes, of course, that the old version was previously validated.
  4. The compiler is considered fit for use after the product verification has passed, so there is also no test protocol or report in this case.
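The upgrade check in step 3 is easy to script. This is only a sketch: the build commands below are `echo` placeholders (the toolchain names and commands are assumptions for illustration), to be replaced with your actual compiler and unit-test-runner invocations for each version:

```python
import subprocess

# Placeholder commands -- substitute the real build + unit-test invocation
# for each compiler version (e.g. a build script that compiles the code
# base and then runs the full unit test suite).
TOOLCHAINS = {
    "vs2008": ["echo", "build-and-test-with-vs2008"],
    "vs2010": ["echo", "build-and-test-with-vs2010"],
}

def build_and_test(name):
    """Run the build/test command for a toolchain; True means it succeeded."""
    result = subprocess.run(TOOLCHAINS[name], capture_output=True, text=True)
    return result.returncode == 0

# The upgrade is validated when the same code base builds and all unit
# tests pass under both the old (previously validated) and new versions.
results = {name: build_and_test(name) for name in TOOLCHAINS}
print(results)
assert all(results.values()), "compiler upgrade validation failed"
```

The script output, archived with the two test reports, is a reasonable artifact for the validation record.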

Integrated Libraries

Examples:

Approach:
  1. Hazard Analysis: Both of these open source libraries are integrated into the product software.  The impact on product functioning, in particular data integrity, must be fully assessed.
  2. You first list the requirements that you will be using. For example, typical logging functionality that might include:
    • The logging system shall be able to post an entry labeled as INFO in a text file.
    • The logging system shall be able to post an entry labeled as INFO in a LEVEL column of a SQL Server database.
    • … same for ERROR, DEBUG, WARN, etc.
    • The logging system shall include time/date and other process information formatted as “YYYY-MM-DD HH:MM:SS…” for each log entry.
    • The logging system shall be able to log exceptions at all log levels, and include full stack traces.
  3. For database functionality, listing basic CRUD requirements plus other specialized needs can be done in the same way.
  4. I have found that the easiest way to test these kinds of requirements is to simply write unit tests that prove the library performs the desired functionality.  The unit tests are essentially the protocol and a report showing that all asserts have passed is a great artifact.
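As a sketch of that approach, here is what requirement-driven unit tests might look like, using Python’s standard logging module as a stand-in for a library such as Log4Net (the requirement-to-test mapping is the point, not the specific library):

```python
import logging
import unittest

class LoggingRequirementTests(unittest.TestCase):
    """One test per requirement; each assert is the pass/fail criterion."""

    def setUp(self):
        self.logger = logging.getLogger("validation")
        self.logger.setLevel(logging.DEBUG)

    def test_info_entry_is_labeled_info(self):
        # Requirement: the system shall post an entry labeled as INFO.
        with self.assertLogs(self.logger, level="INFO") as captured:
            self.logger.info("hello")
        self.assertEqual(captured.records[0].levelname, "INFO")

    def test_exception_logged_with_stack_trace(self):
        # Requirement: exceptions shall be logged with full stack traces.
        with self.assertLogs(self.logger, level="ERROR") as captured:
            try:
                1 / 0
            except ZeroDivisionError:
                self.logger.exception("boom")
        self.assertIsNotNone(captured.records[0].exc_info)

# Running the suite produces the artifact: a report that all asserts passed.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoggingRequirementTests)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

The test-runner report, showing every requirement-named test passing, doubles as the protocol execution record.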

Version Control Systems

Examples:

Approach:
  1. Hazard Analysis: These are configuration management tools and are not part of the product. As such, the level of concern is generally low.
  2. As above, you first list the specific functionality that you expect the VCS to perform. Here are some examples of the types of requirements that need to be tested:
    • The product shall be able to add a directory to a repository.
    • The product shall be able to add a file to a repository.
    • The product shall be able to update a file in a repository.
    • The product shall be able to retrieve the latest revision of files and directories.
    • The product shall be able to branch a revision of files and directories.
    • The product shall be able to merge branched files and directories.
  3. You then write a protocol that tests each one. This would include detailed instructions on how to perform these operations along with the pass/fail criteria for each requirement.
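Because these requirements are exercised through the tool’s user interface or command line, the protocol is usually executed by hand. Here is a small sketch of how such a protocol and its pass/fail report might be structured (the step wording is illustrative, not a real protocol):

```python
from dataclasses import dataclass

@dataclass
class ProtocolStep:
    requirement: str
    instructions: str
    pass_criteria: str
    result: str = "NOT RUN"   # the tester records PASS or FAIL here

PROTOCOL = [
    ProtocolStep(
        requirement="The product shall be able to add a file to a repository.",
        instructions="Create hello.txt and run the VCS add and commit commands.",
        pass_criteria="hello.txt appears in the repository history.",
    ),
    ProtocolStep(
        requirement="The product shall be able to merge branched files.",
        instructions="Branch the repository, edit hello.txt on the branch, merge back.",
        pass_criteria="The merged file contains the branch edit with no conflicts.",
    ),
]

def report(protocol):
    """Render the executed protocol as a simple pass/fail summary."""
    lines = [f"{step.result:7s} {step.requirement}" for step in protocol]
    passed = sum(1 for step in protocol if step.result == "PASS")
    lines.append(f"{passed}/{len(protocol)} requirements passed")
    return "\n".join(lines)

# After executing each step by hand, the tester records the outcome:
for step in PROTOCOL:
    step.result = "PASS"
print(report(PROTOCOL))
```

Generating the report from the same structure that defines the protocol keeps the requirements-to-results traceability automatic.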

Issue Tracking Tools

Examples:

Approach:
  1. Hazard Analysis: These tools are used for the management of the development project. Again, the level of concern is generally low.
  2. You only need to validate the functionality you intend to use.  The features that you don’t use do not need to be tested.
  3. You simply need to test the specific functionality.  Some example requirements — the roles, naming conventions, and workflow will of course depend on your organization and the tool being used:
    • A User shall be able to create a new issue.
    • A User shall be able to comment on an issue.
    • A Project Manager shall be able to assign an issue to a Developer.
    • A Developer shall be able to change the state of an issue to ‘ready for test’.
    • A Tester shall be able to change the state of an issue to ‘verified’.
    • The tool shall be able to send e-mail notifications when an issue has been modified.
    • An Administrator shall be able to define a milestone.
  4. A protocol with detailed instructions and pass/fail criteria is executed and reported on.

Validation is a lot of work but is necessary to ensure that all of the tools and components used in the development of medical device software meet their intended functionality.

Building Safety into Medical Device Software

The article Build and Validate Safety in Medical Device Software takes a critical look at the current processes for medical device software and concludes:

The complexity of the software employed in many medical devices has rendered inadequate traditional methods (testing) for demonstrating their safety.

The article then provides examples of the types of analyses that can be performed to better ensure safety.

Interesting read.

Here are some references:

BohrBug: Not necessarily easy to find, but once discovered is reproducible.

Heisenbug: The ever-annoying bug that cannot be reliably reproduced.

Spin: An open-source software tool for formal verification of distributed software systems.
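A common source of Heisenbugs is an unsynchronized read-modify-write. This Python sketch shows the pattern and its fix; whether the unlocked version actually loses updates varies from run to run, which is precisely what makes such bugs so hard to reproduce:

```python
import threading

def increment(counter, lock=None, iterations=100_000):
    """Increment a shared counter, optionally under a lock."""
    for _ in range(iterations):
        if lock:
            with lock:
                counter["n"] += 1
        else:
            counter["n"] += 1   # read-modify-write: not atomic

def run(lock=None, threads=4, iterations=100_000):
    counter = {"n": 0}
    workers = [
        threading.Thread(target=increment, args=(counter, lock, iterations))
        for _ in range(threads)
    ]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return counter["n"]

# The unlocked run may silently lose updates (a Heisenbug: nondeterministic
# and timing dependent); the locked run is deterministic.
print("unlocked:", run())                  # may or may not equal 400000
print("locked:  ", run(threading.Lock()))  # always 400000
```

Formal verification tools like Spin exist exactly because testing alone rarely flushes out this kind of timing-dependent defect.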

Validation of Off-The-Shelf Software Development Tools

A reader asked me about OTS software tool validation. He says:

It seems to me that the editor and any other tool used to create the software is exactly that, a productivity tool. The end result (compiled binary installed on a validated PC configuration) is still going to go through verification and validation, therefore, it seems validating any of the items used to actually create the binary is unnecessary.

Any thoughts or guidance to help me understand this process?

This is a great question and the source of a lot of confusion.

The bottom line is that all third party tools (and libraries) used to construct or test FDA regulated software need to be validated.

You may think validating a compiler is unnecessary, but the FDA says otherwise — the discussion in section 6.3 of the FDA Guidance on General Principles of Software Validation includes “off-the-shelf software development tools, such as software compilers, linkers, editors, and operating systems.”

The form of the required documentation is detailed in the Off-The-Shelf Software Use in Medical Devices guidance document.  Section 2.1 has the questions that the OTS software BASIC DOCUMENTATION needs to answer:

  1. What is it?
  2. What are the Computer System Specifications for the OTS Software?
  3. How will you assure appropriate actions are taken by the End User?
  4. What does the OTS Software do?
  5. How do you know it works?
  6. How will you keep track of (control) the OTS Software?

For most products (again, OTS tools and libraries, including open source products) this documentation is not as onerous as you might think. #5 is where you apply the intended use validation to the specific product. I have done this for many products: Visual Studio, Subversion/TortoiseSVN, NUnit, Log4Net, etc. You also need to validate custom-developed testing tools and fixtures.

Like it or not, this is the reality of developing FDA-regulated software.