WooHoo! Congratulations to all of the CDIC R&D (including yours truly) and Quality teams for getting this done. Great job everyone!
Last night I attended a San Diego .NET Users Group meeting where the topic was LINQ. The presentation was done by Julie Lerman (http://www.thedatafarm.com/blog and http://blogs.devsource.com/devlife).
Since I have an interest in ORM (see here) I’ve done some reading on LINQ in the past. It’s always amazing to me how much more you seem to learn from a presentation, especially when the speaker is well organized and knowledgeable and provides an engaging delivery. Great job Julie!
LINQ is very impressive. The new .NET Framework Orcas (VS 2008) language features include:

- Implicitly typed local variables (var)
- Object and collection initializers
- Anonymous types
- Extension methods
- Lambda expressions
- Query expressions
These not only provide a foundation for ORM work, but are also powerful .NET language and tool additions for just about any programming task.
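To give a flavor of how these features work together, here’s a minimal sketch of a LINQ query. The data and names are purely illustrative, not from any real project:

```csharp
using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        // Anonymous types plus an implicitly typed array of sample data.
        var patients = new[]
        {
            new { Name = "Smith", HeartRate = 72 },
            new { Name = "Jones", HeartRate = 95 },
            new { Name = "Lee",   HeartRate = 88 }
        };

        // A query expression: filter, order, and project.
        // The compiler translates this into extension method calls
        // (Where, OrderByDescending, Select) with lambda expressions.
        var elevated = from p in patients
                       where p.HeartRate > 80
                       orderby p.HeartRate descending
                       select p.Name;

        foreach (var name in elevated)
            Console.WriteLine(name);  // prints "Jones", then "Lee"
    }
}
```

The same query could be written directly with the extension method syntax; the query expression form is just compiler sugar over it.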
This is a follow-up to the Developing a real-time data flow and control model with WCF post. My original plan was to write a full-fledged article on this. I’ve gotten some requests for the code, but it does not appear that I’m going to have time to complete the article in the near future. So I thought I’d just give a brief description here of what I’ve done so far and provide the code as is.
Please Note: The description provided is very brief and only meant as an overview. None of the implementation details are included here. It’s not a very complicated project. If you have some VS2005 development experience and are willing to dig into the code, you shouldn’t have a problem figuring it all out. Working through this code would provide a good first tutorial on developing with WCF. Some set-up is required (service installation) and is described below.
I originally conceived of this project because of some questions I’d heard from real-time C/C++ developers. They wanted to know about migrating from MFC/VC++ to .NET managed code. The primary concern was about the use of legacy device level code and how to manage future mixed development projects.
So my first thought was to demonstrate how straightforward it is to incorporate COM components into managed code with .NET wrappers. There are already many good articles on integrating old Win32 projects into .NET, e.g. Do you COM? Dealing with Legacy Projects. This project is a concrete example of how that can be done.
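For the simplest case, a COM component can even be used from C# without a wrapper assembly, via late binding. This sketch is not from the RealTimeTemplate project; it uses the standard Windows Scripting.FileSystemObject component just as a stand-in for any registered COM server (with a tlbimp-generated interop assembly you would get a strongly typed wrapper instead):

```csharp
using System;
using System.Reflection;

class ComInteropDemo
{
    static void Main()
    {
        // Look up the COM class by its ProgID; any registered
        // component works the same way.
        Type comType = Type.GetTypeFromProgID("Scripting.FileSystemObject");
        object fso = Activator.CreateInstance(comType);

        // Invoke a COM method through reflection (late binding).
        object exists = comType.InvokeMember(
            "FileExists",
            BindingFlags.InvokeMethod,
            null, fso, new object[] { @"C:\Windows\notepad.exe" });

        Console.WriteLine(exists);
    }
}
```

Note this only runs on Windows with the component registered; for anything non-trivial you’d generate an interop assembly so the compiler can type-check the calls.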
It also illustrates a model of the type of real-time data streaming and control typically required by a physiological monitor.
To extend that model, I wanted to show how WCF could be used as a network transport for that same data stream. Hence the previous post. The addition of a WCF client application that provided a real-time display of the data stream was only logical.
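For pushing a continuous data stream to clients, WCF’s duplex contracts are a natural fit: the client registers a callback interface and the service pushes samples to it. As a rough sketch of that shape (the interface and member names here are illustrative, not the actual RealTimeTemplate contracts):

```csharp
using System.ServiceModel;

// The service contract the client calls to start/stop the stream.
// CallbackContract tells WCF which interface the client must
// implement to receive pushed data over the duplex channel.
[ServiceContract(CallbackContract = typeof(ISineWaveCallback))]
public interface ISineWaveService
{
    [OperationContract]
    void StartStreaming(double frequencyHz);

    [OperationContract]
    void StopStreaming();
}

// Implemented by the client; the service calls OnSamples to push
// each block of waveform data as it is produced.
public interface ISineWaveCallback
{
    [OperationContract(IsOneWay = true)]
    void OnSamples(double[] samples);
}
```

A duplex-capable binding (e.g. NetTcpBinding) is required on both ends for the callback channel to work.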
There are a number of directions that I had planned on taking this project, but that will have to wait for another day. I’m sure that you’ll come up with your own ideas along with numerous improvements.
The download (below) is a Visual Studio 2005 solution, called RealTimeTemplate, with 6 projects (one is C++, all the rest are C#). Here is a diagram of the projects and their relationship. The horizontal arrows show the data flow as described above.
The projects are:
Here is what the Windows Form looks like when the application is running.
In order to run the sine wave display application, you’ll first have to install and start the SineWaveWCFService.
To install the service, run InstallSineWaveWCFService.bat in SineWaveWCFService\bin\Debug. The service can be un-installed with the corresponding uninstall batch file in the same directory.
The source code can be downloaded here:
The HealthLeaders Technology article primarily focuses on the challenges of interfacing medical devices in the hospital environment. I think the situation in a physician’s private practice or small group office is even worse.
The needs for medical device connectivity in both environments are essentially the same:
Hospitals have the advantage of working with big EMR vendors who in turn can provide connectivity solutions for a larger variety of medical devices. Also, large hospital chains have enough clout to be able to mandate performance criteria from their vendors that include interoperability.
The typical physician’s office uses a smaller EMR provider (or even worse, a ‘home brew’ system) where in many cases connectivity with external devices is an afterthought, if it exists at all. Even when you work with mid-sized EMR companies, each has their own proprietary external data interface. Very few of these smaller stand-alone EMR management systems provide standard interfaces (e.g. HL7) for external device data capture.
The interoperability problem is not a technical one. The issue is the time and resources it takes to implement and validate a given medical device interface. The real hope of a MD PnP-like solution is that the cost of that interface can be significantly reduced.
So, here’s my perspective. As a medical device manufacturer, when we take our device into a private practice physician that has an existing (or planned) EMR system, the first requirement is pretty much always the same. It’s simply that the diagnostic results from our device automatically appear in a patient’s record in their EMR system. To make this happen, they have to choose between three possibilities:

1. Have us (the device manufacturer) build a custom interface to their EMR system.
2. Have their EMR company build an interface to our device.
3. Hire a third party to integrate the two systems.
The problem with #1 is that we don’t have the resources to build each unique interface required to satisfy all of our customers. Besides, our business is building medical devices, not EMR solutions.
#2 might not work out because unless the EMR company has a lot of customers with our devices they will either charge a large custom engineering fee or may just say they won’t do it at all. The third option is doable, but is also potentially costly.
Notice that all of the choices require additional investment by the physician. I wonder if these types of issues may be one of the contributing factors for the low adoption rate of EMR for office-based physicians.
Not being able to provide cost-effective EMR integration is bad for everyone involved. It’s bad for a medical device manufacturer (like us) because it makes it that much harder to sell systems. It’s bad for the physician’s office because without EMR integration they’ll end up with a less effective paper-based solution. It’s also bad for the EMR companies because they won’t be able to take advantage of the future opportunities that a fully integrated medical office would provide.
The reasons may be different, but my conclusion about EMR connectivity with medical devices is the same as Tim’s: “It’s a mess.”
UPDATE (7/21/08): Ran across this MIT Technology Review post about the MD PnP program:
“Plug and Play” Hospitals (Medical devices that exchange data could make hospitals safer).
Following my EMR-Facebook brainstorming post I ran across the IndivoHealth project (via WSJ-Health Blog). The announcement is that a consortium of large companies, Dossia, would be extending the Indivo open source core. Indivo has implemented the paradigm shift that I discussed.
The Indivo system is essentially an inversion of the current approach to medical records, in that the record resides with the patients and the patients grant permissions to institutions, clinicians, researchers, and other users of medical information. Indivo is a distributed, web-based, personally controlled electronic medical record system that is ubiquitously accessible to the nomadic user, built to public standards, and available under an open-source license.
Very cool. I guess I wasn’t the first to think of this! 🙂
This is a follow-up to the Kernel Object Namespace and Vista post. Those previous findings were made using an administrative user in Vista. When I tried creating a ‘Session\AppName’ Mutex as a non-administrative user though, the application hung!
Just to be clear, here is how (simplified) I’m creating the Mutex:
string mutexName = @"Session\AppName";
bool mutexWasCreated;
// Grant all users full control so the Mutex can be opened across sessions.
MutexSecurity mSec = new MutexSecurity();
MutexAccessRule rule = new MutexAccessRule(
    new SecurityIdentifier(WellKnownSidType.WorldSid, null),
    MutexRights.FullControl, AccessControlType.Allow);
mSec.AddAccessRule(rule);
Mutex m = new Mutex(false, mutexName, out mutexWasCreated, mSec);
The hang occurs when the Mutex is created. By hang I mean that the process just spins its wheels, sucking 50-60% of the CPU, and will continue until it’s killed. Based on WinDbg analysis it’s either stuck in the underlying Win32 CreateMutex() call or CreateMutex() is being called repeatedly. It’s probably the latter.
When ‘Local\’ or ‘Global\’ are used, the Mutex is created fine! As noted before, ‘Local\’ doesn’t work for other reasons so I’m stuck using the ‘Global\’ namespace. Go figure?
There’s an article in the October The Atlantic Monthly entitled About Facebook (subscription required) by Michael Hirschorn. His contention is that Facebook is currently the site that “comes closest to fulfilling the promise of social media.” As I read through the description of what that means — the way you qualify friends, the ability to track others and their interaction with others, and the restrictions you can put in place on what others can see about you, the groups you join, etc. — it made me think of the implementation of EMR systems. The primary components that Facebook has tackled are work flow and interoperability (I’ve touched a little on this before).
Maybe the Facebook model could point the way to better EMR solutions. As I started to look around I found that others are thinking the same way.
The first question is: how are the requirements for an EMR system met through the functionality provided by a social network?
As the article points out:
In Facebook’s vision of the Web, you, the user, are in control of your persona.
The same should be said for your personal health information.
In addition to providing work flow restrictions, Facebook also allows developers to create custom applications through the use of the Facebook Platform. By doing so, it has created a well-defined sandbox in which to create user-defined content. MySpace will also be following suit in this regard.
It’s the “walled garden” that opens the door to interoperability. This strategy is considered flawed by some (quoted in the article), but is perfect for EMR purposes. Within the confines of these APIs any medical record content provider would be able to share their data inside the sandbox. The real value is the content of the data, not the mechanism that allows access into the environment.
I know there are many other issues that need to be dealt with when considering EMR functionality. However, when thinking about the popularity and ease-of-use of these social networking sites it’s hard not to see them as a possible model for improving health information flow.
I spent quite a few years developing diagnostic Electroencephalography (EEG) systems and software. I always get a kick out of articles with titles like this one: Microsoft Working On Mind-Reading Software. It’s the mind-reading part that gets me because your first impression is that Microsoft is developing technology that will allow it to somehow detect what you’re thinking. This, of course, will not be happening in the near or foreseeable future.
The work that Microsoft Research is doing in this area (see here) is fundamental research on the Human-Computer Interface (HCI). The Using a Low-Cost EEG for Task Classification in HCI Research article uses standard frequency domain EEG features (delta, theta, alpha, etc.) as classifiers in a Bayesian Network for differentiating three mental tasks. What was interesting to me was that they recognized the limitations of using EEG technology alone as a human-computer interface. The understanding and use of other physiological data (e.g. motor activity) along with EEG will have to be explored as a way to improve task detection.
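To make the “frequency domain EEG features” concrete: each feature is just the signal’s power within one of the classic bands (delta 1-4 Hz, theta 4-8 Hz, alpha 8-12 Hz, etc.), and those per-band powers become the inputs to the classifier. Here’s a minimal sketch (not Microsoft’s code) using a naive DFT on a synthetic signal; the class and method names are my own:

```csharp
using System;

class EegBandPower
{
    // Power of the signal in [loHz, hiHz) computed with a direct DFT.
    // Slow (O(n^2)) but easy to follow; real code would use an FFT.
    static double BandPower(double[] x, double sampleRateHz, double loHz, double hiHz)
    {
        int n = x.Length;
        double power = 0.0;
        for (int k = 1; k < n / 2; k++)
        {
            double freq = k * sampleRateHz / n;
            if (freq < loHz || freq >= hiHz) continue;
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++)
            {
                double angle = 2.0 * Math.PI * k * t / n;
                re += x[t] * Math.Cos(angle);
                im -= x[t] * Math.Sin(angle);
            }
            power += (re * re + im * im) / ((double)n * n);
        }
        return power;
    }

    static void Main()
    {
        // Synthetic "EEG": a pure 10 Hz (alpha-band) sine sampled at 128 Hz.
        double fs = 128.0;
        double[] signal = new double[256];
        for (int t = 0; t < signal.Length; t++)
            signal[t] = Math.Sin(2.0 * Math.PI * 10.0 * t / fs);

        double delta = BandPower(signal, fs, 1.0, 4.0);
        double theta = BandPower(signal, fs, 4.0, 8.0);
        double alpha = BandPower(signal, fs, 8.0, 12.0);

        // Alpha power dominates, as expected for a 10 Hz signal.
        Console.WriteLine($"delta={delta:F4} theta={theta:F4} alpha={alpha:F4}");
    }
}
```

A feature vector of such band powers (per electrode, per time window) is the kind of input a Bayesian network classifier would be trained on to separate the mental tasks.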
Not only is this type of work important for meeting the needs of the physically disabled; as the Wii and Surface have shown, innovative HCI systems can also have a dramatic effect on how we all interact with computers.
‘Thought-reading’ system controls wheelchair and synthesizes speech is another one. The system processes larynx nerve signals for speech synthesis and wheelchair control. The technology looks very cool and has the potential to improve the lives of handicapped individuals. I suppose you could consider motor neuron activity as the output of thought, but ‘thought-reading’ just feels like a misnomer. Maybe it’s just me.
Another ‘mind-reading’ technique is the use of Evoked Potentials (EP). One that got a lot of press a few years back was Brain Fingerprinting (also see here). I’m sure there’s still on-going research in the P300 area, but nothing has grabbed much attention since.
Also, check out Computers can read your mind. Amazing!
I found a couple of companies that appear to be trying to use EEG processing algorithms for HCI. Both are focused on the gaming industry. Neither provides details on how their products work, so it’s hard not to be skeptical about their functionality claims.
Here’s another interesting technology: Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive, lightweight imaging tool which can measure blood oxygenation levels in the brain. Check out the article here.
Here’s the Microsoft patent application: Using electroencephalograph signals for task classification and activity recognition (via here).
Check out Brain2Robot Project which uses EEG signal processing (my highlighting):
Highly efficient algorithms analyze these signals using a self-learning technique. The software is capable of detecting changes in brain activity that take place even before a movement is carried out. It can recognize and distinguish between the patterns of signals that correspond to an intention to raise the left or right hand, and extract them from the pulses being fired by millions of other neurons in the brain. These neural signal patterns are then converted into control instructions for the computer.
If they can do this reliably, that’s quite an accomplishment.
I get a lot of grief about my coffee drinking habits. Apparently, it is quite noticeable that I consume large quantities of coffee. People inevitably ask me what would happen if I didn’t get my daily fix. Would I have severe headaches? Would I go psychotic? Would I pass out at my desk? I suppose any of those things could happen, but I’m not about to find out, and here’s why:
So there! 🙂