Monthly Archive for June, 2008

One Reason Why Linux Isn’t Mainstream: ./configure and make

Bear with me, I’ll get to the point of the title by the end of the post.

I primarily develop for Microsoft platform targets, so I have a lot of familiarity with Microsoft development tools and operating systems. I also work with Linux-based systems, but mostly on the systems administration side: maintaining servers for R&D tools like Trac and Subversion.

I recently had some interest in trying to use Mono running on Linux as a .NET development platform.

This also allowed me to try Microsoft Virtual PC 2007 (SP1) on XP SP3. I went to a local .NET Developer’s Group (here) meeting a couple of weeks ago on Virtual PC technology. Being a Microsoft group, most of the discussion was about running Microsoft OSes, but I saw the potential for using VPC to run Linux for cross-platform development. My PC is an older Pentium D dual core without hardware virtualization support, but it has 3 GB of RAM and plenty of hard disk space, so I thought I’d give it a try.

Download and installation of Ubuntu 8.04 (Hardy Heron) LTS Desktop on VPC-2007 is a little quirky, but there are many blog posts that detail what it takes to create a stable system: e.g. Installing Ubuntu 8.04 under Microsoft Virtual PC 2007. Other system optimizations and fixes are easily found, particularly on the Ubuntu Forums.

OK, so now I have a fresh Linux installation and my goal is to install a Mono development environment. I started off by following the instructions in the Ubuntu section from the Mono Other Downloads page. The base Ubuntu Mono installation does not include development tools. From some reading I found that I also had to install the compilers:

# apt-get install mono-mcs
# apt-get install mono-gmcs
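In hindsight, it would have saved time to check up front which of the build tools a Mono source package will look for are already on the PATH. Here’s a minimal sketch; the tool list (mcs, gmcs, al, pkg-config) is my guess at the usual suspects, not an official prerequisite list:

```shell
# report which commonly needed build tools are on the PATH
# (mcs, gmcs, al, pkg-config are assumptions, not an official list)
for tool in mcs gmcs al pkg-config; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

A check like this would have flagged the missing `al` tool (see below) before ./configure ever ran.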

So now I move on to MonoDevelop. Here’s what the download page looks like:

[Image: MonoDevelop download page]

Here’s my first gripe: Why do I have to download and install four other dependencies (not including the Mono compiler dependency that’s not even mentioned here)?

Second gripe: All of the packages require unpacking, going to a shell prompt, changing to the unpack directory, and running the dreaded:

./configure
make

Also notice the line: “Compiling the following order will yield the most favorable response.” What does that mean?
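My best guess: the packages depend on one another, so they have to be built in dependency order. That’s just a topological sort. Here’s a sketch using the standard tsort utility; the package relationships below are made up for illustration, not taken from the actual MonoDevelop dependency graph:

```shell
# each line is "prerequisite dependent"; tsort prints an order in which
# every package appears after the packages it depends on
printf '%s\n' \
  'mono mono-addins' \
  'mono gtk-sharp' \
  'gtk-sharp gnome-sharp' \
  'mono-addins monodevelop' \
  'gnome-sharp monodevelop' \
| tsort
```

With this (hypothetical) graph, any valid build order starts with mono and ends with monodevelop — which is exactly the kind of ordering the download page is hinting at, without ever explaining it.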

So I download Mono.Addins 0.3, unpack it, and run ./configure. Here’s what I get:

configure: error: No al tool found. You need to install either the mono or .Net SDK.

This is as far as I’ve gotten. I’m sure there’s a solution for this. I know I either forgot to install something or made a stupid error somewhere along the way. Until I spend the time searching through forums and blogs to figure it out, I’m dead in the water.

I’m not trying to single out the Mono project here. If you’ve ever tried to install a Unix application or library from source, you inevitably end up in dependency hell — having to install a series of other packages that in turn require some other dependency to be installed.

So, to the point of this post: There’s a lot of talk about why Linux, which is free, isn’t more widely adopted on the desktop. Ubuntu is a great product — the UI is intuitive, system administration isn’t any worse than Windows, and all the productivity tools you need are available.

In my opinion, one reason is ./configure and make. If the open source community wants more developers for creating innovative software applications that will attract a wider user base, these have to go. I’m sure that the experience I’ve described here has turned away many developers.

Microsoft has its problems, but it has the distinct advantage of being able to provide a complete set of proprietary software along with excellent development tools (Visual Studio with ReSharper is hard to beat). Install them, and you’re creating and deploying applications almost immediately.

The first step to improving Linux adoption has to be making it easier for developers to simply get started.

Connecting Computers to FDA Regulated Medical Devices

Pete Gordon asked a couple of questions regarding FDA regulations for Internet-based reporting software that interface with medical devices. The questions are essentially:

  1. How much documentation (SRS, SDS, Test Plan) is required and at what stage can you provide the documentation?
  2. How does the FDA view SaaS architectures?

The type of software you’re talking about currently has no real FDA regulatory oversight. The FDA has recently proposed new rules for connectivity software. I’ve commented on the MDDS rules, but Tim has a complete overview here: FDA Issues New MDDS Rule. As Tim notes, if the FDA puts the MDDS rules into place and becomes more aggressive about regulation, many software vendors that provide medical device interfaces will be required to submit 510(k) premarket approvals.

Dealing with the safety and effectiveness of medical devices in complex networked environments is on the horizon. IEC 80001 (and here) is a proposed process for applying risk management to enterprise networks incorporating medical devices.  My mantra: High quality software and well tested systems will always be the best way to mitigate risk.

Until something changes, the answer to question #1 is that if your software is not a medical device, you don’t even need to deal with the FDA. The answer to question #2 is the same: the FDA doesn’t know anything about SaaS architectures unless it’s submitted as part of a medical device 510(k).

I thought I’d take a more detailed look at the architecture we’re talking about so we can explore some of the issues that need to be addressed when implementing this type of functionality.

[Image: mdds2.jpg — simplified medical device interface architecture]

This is a simplified view of the way medical devices typically interface to the outside world. The Communications Server transmits and receives data from one or more medical devices via a proprietary protocol over whatever media the device supports (e.g. TCP/IP, USB, RS-232, etc.).

In addition to having local storage for test data, the server could pass data directly to an EMR system via HL7 or provide reporting services via HTTP to a Web client.
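For a flavor of what “pass data directly to an EMR system via HL7” means in practice, here’s a sketch of a minimal HL7 v2 ORU (observation result) message. The segment contents below are fabricated for illustration; a real interface would follow the device vendor’s and EMR’s conformance profiles:

```shell
# build a minimal HL7 v2.3 ORU^R01 message; segments are separated by
# carriage returns, fields by '|' (all field values here are fabricated)
printf '%s\r' \
  'MSH|^~\&|COMMSRV|DEVICE|EMR|HOSPITAL|20080615120000||ORU^R01|MSG00001|P|2.3' \
  'PID|1||123456^^^HOSPITAL||DOE^JOHN' \
  'OBR|1|||GLUCOSE^Blood Glucose' \
  'OBX|1|NM|GLUCOSE||98|mg/dL|70-110|N|||F'
```

Every one of those pipe-delimited fields is a place where a mapping or formatting error can corrupt clinical data, which is why verification of these interfaces is taken so seriously.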

There are many other useful functions that external software systems can provide. By definition though, an MDDS does not do any real-time patient monitoring or alarm generation.

Now let’s look at what needs to be controlled and verified under these circumstances.

  1. Communications interaction with proper medical device operation.
  2. Device communications protocol and security.
  3. Server database storage and retrieval.
  4. Server security and user authentication.
  5. Client/server protocol and security.
  6. Client data transformation and presentation to the user (including printed reports).
  7. Data export to other formats (XML, CSV, etc.).
  8. Client HIPAA requirements.

Not only is the list long, but these systems involve the combination of custom-written software (in multiple languages), multiple operating systems, configurable off-the-shelf software applications, and integrated commercial and open source libraries and frameworks. Also, all testing tools (hardware and software) must be fully validated.

One of the more daunting verification tasks is identifying all of the possible paths that data can take as it flows from one system to the next. Once identified, each path must be tested for data accuracy and integrity as it’s reformatted for different purposes, communications reliability, and security. Even a modest one-way store-and-forward system can end up with a hundred or more unique data paths.
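To see how quickly the count grows, multiply the number of branch choices at each hop. The branch counts below are hypothetical, but even modest fan-out at each stage pushes the total past a hundred:

```shell
# hypothetical fan-out: 4 device protocols x 3 storage formats
# x 3 export targets x 2 client types x 2 report layouts
echo $(( 4 * 3 * 3 * 2 * 2 ))   # 144 unique end-to-end data paths
```

Each of those 144 paths would need its own verification of data accuracy, integrity, and security.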

A full set of requirements, specifications, and verification and validation test plans and procedures would need to be in place and fully executed for all of this functionality in order to satisfy the FDA Class II GMP requirements. This means that all of the software and systems must be complete and under revision control. There is no “implementation independent” scenario that will meet the GMP requirements.

It’s no wonder that most MDDS vendors (like EMR companies) don’t want to have to deal with this. Even for companies that already have good software quality practices in place, raising the bar to meet FDA quality compliance standards would still be a significant organizational commitment and investment.

Moving Mountains With the Brain

In today’s New York Times business section there’s a piece called: Moving Mountains With the Brain, Not a Joystick. I’ve previously discussed both of the mentioned EEG-based headsets here. The article highlights some of the problems that this type of technology will face in the consumer marketplace:

“Not all people are able to display the mental activity necessary to move an object on a screen,” he said. “Some people may not be able to imagine movement in a way that EEG can detect.”

I agree. Even though Emotiv claims that “all 200 testers of the headset had indeed been able to move on-screen objects mentally,” it’s very doubtful that the device will have that level of success (100%!) with real gamers.

The article also talks about the use of facial muscle activity (EMG) in addition to the EEG signal. With proper electrode placement, I think EMG holds far more promise for enhancing the gaming experience. Even EOG could be used effectively as a feedback and control mechanism. Reliable EEG processing for this purpose is still a long way off.

UPDATE (6/17/08): More of the same: No Paralysis in Second Life

UPDATE (6/29/08): Here’s a pretty good description of how these devices are being used for control purposes:  OCZ’s Neural Impulse Actuator (The flying car of control schemes).

UPDATE (7/21/08): An even more thorough evaluation: OCZ NIA Brain-Computer Interface. A generally positive and realistic assessment of the technology:

…the NIA isn’t a replacement for traditional input methods, it is merely a powerful supplement.

[Image: brainwaves.jpg]

Goosh, a Google Command Line

For us old Unix hackers, Goosh, a Google Command Line is very cool.

Check it out here: Goosh.org.
