The Problem With Google And Why You Should Care

When I read The Case Against Google in The New York Times last week, it was with the typical interest of a technology bystander. It was like reading the local paper about hit-and-runs, robberies, or the latest political scandal: somewhat interesting, but it doesn’t really affect me (thankfully). Or so I thought.

Then Medgadget published Our Case Against Google, a comprehensive (and damning) indictment of Google and the “Google/Facebook duopoly”. Their bottom line:

Google is an evil monopoly.

This is not a new red flag. Even nine years ago there were concerns: Is Google a Monopoly? Just ask Stack Overflow (and me). Note that this site’s Google search traffic in 2009 was 95.9%. Now it’s 98.4%, mostly because there are fewer search engine competitors around today.

Here’s an overly simplistic summary of the effects of these monopolistic behaviors:

  1. It kills innovation. As the Raffs’ journey shows, superior technology can easily be crushed.
  2. It kills high-quality content, which is well-documented in the Medgadget article.

Companies trying to innovate and content providers dependent on ad revenue for survival are, of course, directly affected by this. But I’m neither of those, so how does this affect me?

I’m an Android/Gmail/Google Docs/Maps person (i.e. no Apple here). I take it for granted that all of these wonderful Google-supplied technologies and conveniences are free. Google funds these goodies through its anti-competitive tactics and biased search algorithms. Does this mean that I’m benefiting from Google’s bad behavior? No duh!

So the logical conclusion is that my Google freebies aren’t free after all.

Technology innovation and high-quality content are also things that I take for granted. But in reality, these are being sacrificed; they are the actual cost. The struggles (and potential failure) of companies like Foundem and Medgadget are a very high price to pay, and it’s being paid all the time as a result of Google’s behavior.

Why you should care: Monopolistic behavior carries this high price for all of us. This is true no matter what technology you use.

Modern-day technology anti-trust litigation (like the 1998 Microsoft case before it) involves complex legal, business, and technology issues that are well worth becoming educated about.

Unfortunately, battling 800-pound gorillas is a difficult business. Asking this small med-tech community to raise awareness wherever possible is the least we can do.

Thanks for reading!

Update (3/22/18): Google and Facebook can’t help publishers because they’re built to defeat publishers


React JS Dynamic DOM Generation

I had implemented an Angular 4 dynamic DOM prototype using the Angular Dynamic Component Loader and wanted to do the same thing with React JS. After doing some research I found that it was not very obvious how to accomplish this.

By the time I was done there ended up being two functional components worth sharing:

  1. Dynamic component creation using JSX.
  2. JSON driven dynamic DOM generation.

Source code for the demo project is here: reactjs-dynamic-dom-generation

Try demo app live! (with CodeSandbox)

The project was created using create-react-app. The only other package added was axios for making the AJAX call to retrieve the JSON content.

Dynamic Component Creation

With JSX, dynamic content generation turned out to be pretty simple. The core piece of code is in DynamicComponent.js:

In the demo application, all available components register themselves via the ComponentService, which is just a singleton that maintains a simple hash map. For example:

As highlighted on lines 17-18, the desired React Component is first fetched from the ComponentService and then passed to JSX via <this.component … />.

The JSX preprocessor converts this embedded markup into JavaScript, with the React element type set to the passed component along with the additional attributes. I.e., if the UI type was ‘switch’, the equivalent hard-coded markup would have been <SwitchComponent … />, which is a perfectly acceptable JSX template.

Voilà, we have created a dynamic DOM element!

Note that Vue.js applications using JSX can use the same technique except they pass a Vue Component instead.

JSON Driven Dynamic DOM Generation

In order to demonstrate dynamic DOM generation I have defined a simple UI JSON structure. The demo uses Bootstrap panels for the group and table elements and only implements a few components.

The UI JSON is loaded from the server when the application is started and drives the DOM generation. A DynamicComponent is passed a context (i.e. its associated JSON object) along with a path (see below). Each UI element has the following attributes:

  • name: A unique name within the current control context. It is used to form the namespace-like path that allows this component to be globally identified.
  • ui: The type of UI element (e.g. “output”, “switch”, etc.). This is mapped by the ComponentService to its corresponding React Component. If the UI type is not registered, the DefaultComponent is used.
  • label: Label used on the UI.
  • controls: (optional) For container components (“group”, “table”), this is an array of contained controls.
  • value: (optional) For value-based components.
  • range: (optional) Specifies the min/max/step for the range component.

This structure can easily be extended to meet custom needs.

There are a number of implementation details that I’m not covering in this post. I think the demo application is simple enough that just examining and playing with the code should answer any questions. If not, please ask.

The example UI JSON file is here: example-ui.json

The resulting output, including console logging from the switch and range components, looks like this.

This is, of course, a very minimal implementation that was designed to just demonstrate dynamic DOM generation. In particular, there is no UI event handling, data binding, or other interactive functionality that would be required to make a useful application.


The Real Impediment to Interoperability

Medical device interoperability is one of my favorite subjects. With the meteoric rise of  IoT, there’s more and more discussion like this: Why we badly need standardization to advance IoT.

The question for me has always been: Why is standardizing communications so hard to achieve?  Healthcare providers, payors, EMR vendors, etc. have their own incentives and priorities with respect to interoperability.  The following is based on my experiences as a medical device developer and has many similarities to the IoT world.  As such, these observations are probably not applicable to many parts of the healthcare domain.

The Standard API

Let’s use a simple home-appliance scenario to illustrate why interoperability is so important. Say you have a mobile application that wants to control your dishwasher: start/stop operation, show wash status, or notify you when a wash is complete. The App, without and with interoperability, is shown here:


  • Without a standard API: The application has to write custom code for each dishwasher vendor. This is a significant burden for the App developer and prevents its use by a wider customer base.
  • With a standard API: New dishwasher models that implement the “dishWasher API” will just work without having to change the application (ideally anyway).  At the very least, integration of a new model is much easier.

Having a standard API that every App (and as importantly, other devices) can use to interoperate is critical for IoT (appliances and medical devices) growth. Besides all of the obvious benefits, in the healthcare industry the stakes are even higher (from Center for Medical Interoperability — Need for Change):

It will improve the safety and quality of care, enable innovation, remove risk and cost from the system and increase patient engagement.

The other important thing to note is that the API communication shown above requires full semantic interoperability. This is the most rigorous type of interoperability because the App must understand the full meaning of the data in order to function properly. E.g., knowing whether a temperature is in ºF or ºC has significant consequences.

Let me also point out that even though semantic interoperability is not easy, the barriers to achieving it are generally not technical. There may be points of contention on protocols, units of measure, API signatures, and functional requirements, etc., but when you’re working within a specific discipline these can usually be worked out.  Non-healthcare industries (telecom, banking, etc.) have proven it can be done.

Cost of Standards

There are a number of hurdles to adopting standards (e.g. HL7, FHIR, etc.). The costs of implementing and maintaining compliance with a standard are non-trivial:

  • The additional development and testing overhead required. On the development side, these interfaces are often not ideal for internal communication and can have a performance impact.
  • Some standards have a certification process (e.g. the Continua Certification Process) that requires rigorous testing and documentation.
  • If you have a data element that the standard does not currently cover, you may be faced with the standard’s approval process, which can take a significant amount of time. For example, the FHIR Change Request Tracking System currently has thousands of entries. Again, this is not a technical issue; dealing with bureaucracy is just part of the overhead of conforming to a standard.

Company Motivations

Now let’s try to understand what’s important to a company that’s trying to develop and market a product:

  1. Product differentiation. Provide vendor-unique features (a “niche”) that are not available from competitors.
  2. Time to market. Being there first is critical for brand recognition and attracting customers.
  3. One-stop shop (multi-product companies). “If you use our product family your experience will be seamless!”

The last item is particularly important. Following the appliance theme:

This strategy is of course how Apple became the largest company in the world. In most industries, the big companies have the largest market share. This “walled garden” approach is the most natural way to lock out both large and small competitors.

The First Hint of Problems

Notice that the cost of interoperability works against all three of the market goals a company is trying to achieve. Standards are:

  1. A “me too” feature.
  2. Time-consuming to implement.
  3. Holes punched in the desirable closed platform.

The actual impact depends on a lot of factors, but it can be significant.

The Real Impediment

But the real elephant in the room is return on investment (ROI):

The ROI on interoperability is inherently very low and often negative (Gain < Cost). This is because:

  1. As noted above, conforming to an external standard has a significant cost associated with it.
  2. Lack of demand. Interoperability is not something a customer is willing to pay extra for (zero Gain).

I think companies really do care about patient safety, quality of care, and healthcare cost reduction. This is what motivates their business and drives innovation. The reality is that ROI is also a factor in every product decision.

Side note: If conforming to a standard were mandated as a regulatory requirement, then ROI would become moot and the expense would just be part of the cost of doing business.

I’m sure that interoperability is on every company’s feature backlog, but it’s not likely to become a primary actionable priority over all of the other higher ROI functionality. Those other features also contribute to improving healthcare, but the bottom line is hard to ignore.

Contributing resources and putting a logo on a standards organization’s sponsor website is not the same thing as actually implementing and maintaining real interoperability in a product.

Apologies for the cynicism. It’s just frustrating that nothing has really changed after all these years. Interoperability: Arrested Progress is close to four years old, and the same old, same old (insanity) still prevails.

I think the reasons outlined here are a plausible explanation of why this is so. We’re all still waiting for that game-changer.

Canine Mind Reading

That’s right! It is wallace-shawn-inconceivable that the Indiegogo No More Woof campaign raised over $22,000 from 231 contributors. The project has been around since late 2013, but this is the first time I’ve run across it (via the recent IEEE article below). I just couldn’t resist posting the picture.

It goes without saying that the Scandinavian company NSID (currently “hibernating”) failed to deliver on its promise. This is well chronicled by IEEE: The Cautionary Tale of “No More Woof,” a Crowdfunded Gadget to Read Your Dog’s Thoughts.

The article even mentions Melon, a human EEG headband Kickstarter that I was involved with. I feel fortunate that I actually received a working device.

BCI is very difficult even under the best of circumstances with humans. I think the correct thought sequence for working with any EEG-based device is:

  1. “I’m excited”
  2. “I’m curious who that is?”
  3. “I’m tired”


Publishing an Angular 2 Component NPM Package

It was suggested on the Angular 2 Password Strength Bar post that I publish the component as an NPM package. This sounded like a good way to share, and it was something I had never done before. So, here it is.

You should go to the GitHub repository and inspect the code directly. I’m just going to note some non-obvious details here.

Application Notes:

  • Added in-line CSS with the @Component styles metadata property.
  • In addition to passwordToCheck, added client configurable barLabel parameter.

Project Notes:

  • src: This is where the PasswordStrengthBar component (passwordStrengthBar.component.ts) lives. The CSS and HTML are embedded directly in the @Component metadata. Also note that tsconfig.json compiles the TypeScript to ../lib, which is what is distributed to NPM (app and src are excluded in .npmignore).
  • The index.d.ts and index.js in the root directory reference ./lib to allow importing the component without having to specify a TypeScript file. See the How TypeScript resolves modules section in TypeScript Module Resolution. I.e. after the npm installation is complete you just need this:

Development Notes:

Overall (and briefly), I find the TypeScript/JavaScript tooling very frustrating. I’m not alone; e.g.: The Controversial State of JavaScript Tooling. The JSON configuration files (npm’s package.json, TypeScript, Karma, Webpack, etc.) are complex and the documentation is awful.

The worst part (IMO) is how fragile everything is. The tools and libraries change rapidly with no consideration for backward compatibility or external dependencies. Updating versions invariably breaks the build. Online fixes often take you down a rabbit hole of unrelated issues. If you’re lucky, the solution is just to continue using an older version. Use npm-check-updates at your own risk!


If you have questions or problems, find a bug, or have suggested improvements, please open an issue. Even better, fork the project, make the desired changes, and submit a pull request.


Angular 2 Password Strength Bar

I spent a little time converting AngularJS Directive to test the strength of a password into a pure Angular 2 component and thought I’d share.

A working demo and all of the code can be found here: Angular 2 Password Strength Bar.


  • Upgraded to TypeScript and used the OnChanges interface.
  • Incorporation of the bar is now component-based:

<password-strength-bar [passwordToCheck]="account.password"></password-strength-bar>

  • Removed direct DOM modification and replaced it with Angular 2 dynamic in-line styles.
  • Removed the jQuery dependence.


Old Nerds

Nobody is immune from aging.

In the tech industry, this can be a problem as described in Is Ageism In Tech An Under-The-Radar Diversity Issue?.  Programmer age distribution from the Stack Overflow Developer Survey 2016 Results clearly shows this:


Worth noting:

  • 77.2% are younger than 35.
  • Twice as many are under 20 as are over 50.

Getting old may suck, but if problem-solving and building solutions are your passion, being an old nerd (yes, I’m way over 35) really can look like this:

There’s a lot of reasonable advice in Being A Developer After 40, but I think this sums it up best:

As long as your heart tells you to keep on coding and building new things, you will be young, forever.

I sure hope so! 🙂

UPDATE 13-Oct-16: Too Old for IT

Melon Headband Android SDK

It appears that the Melon Headband Alpha Android SDK is no longer available from Melon. See Melon Headband — Android Beta.

Below is a copy of the SDK that I received in April 2015. I successfully built and ran the AndroidMelonBasicSample application on my Motorola phone. It actually communicated with the Melon headband!

Melon was purchased by DAQRI in February 2015. DAQRI still maintains a Melon product page, but the Google+ Melon Headband – Android Users community (see update below) has been all but silent for over 6 months. That, plus the website message “We’re back in the lab crafting new things,” is a good indication that Melon development is no longer active.


Update (4/6/16): The community has shut down.


Deep Learning

I recently attended a Deep Learning (DL) meetup hosted by Nervana Systems. Deep learning is essentially a technique that allows machines to interpret sensory data. DL attempts to classify unstructured data (e.g. images or speech) by mimicking the way the brain does, with the use of artificial neural networks (ANNs).

A more formal definition of deep learning is:

DL is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures.

I like the description from Watson Adds Deep Learning to Its Repertoire:

Deep learning involves training a computer to recognize often complex and abstract patterns by feeding large amounts of data through successive networks of artificial neurons, and refining the way those networks respond to the input.

This article also presents some of the DL challenges and the importance of its integration with other AI technologies.

From a programming perspective, constructing, training, and testing DL systems starts with assembling ANN layers.

For example, categorization of images is typically done with Convolutional Neural Networks (CNNs; see Introduction to Convolution Neural Networks). The general approach is shown here:

Construction of a similar network using the neon framework looks something like this:

Properly training an ANN involves processing very large quantities of data. Because of this, most frameworks (see below) utilize GPU hardware acceleration. Most use the NVIDIA CUDA Toolkit.

Each application of DL (e.g. image classification, speech recognition, video parsing, big data, etc.) has its own idiosyncrasies that are the subject of extensive research at many universities. And of course, large companies are leveraging machine intelligence for commercial purposes (Siri, Cortana, self-driving cars).

Popular DL/ANN frameworks include:

Many good DL resources are available at: Deep Learning.

Here’s a good introduction: Deep Learning: An MIT Press book in preparation

Creating a Minimally Sized Docker Image

This is a follow-up to the Publishing a Static AngularJS Application with Docker post.

Relative to the size of a standard Ubuntu Docker image, I thought the 250MB CoreOS image was “lean”. Earlier this month I went to a Docker talk by Brian DeHamer and learned that there are much smaller Linux base images available on DockerHub. In particular, he mentioned Alpine, which is only 5MB and includes a package manager.

Here are the instructions for building the same Apache server image from the previous post with Alpine.

The Dockerfile has significant changes:

Explanation of differences:

line 2: The base image is alpine:latest.

lines 4-5: Unlike the CoreOS image, the base Alpine image does not include Apache. These lines use the apk package manager to install Apache2 and clean up afterwards.

lines 6-7: These use the exec form of the Dockerfile ENTRYPOINT instruction so that httpd runs automatically when a container is started.

line 8: The static web content is copied to a different directory.
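Since the Dockerfile itself isn’t reproduced above, here is a hedged reconstruction following the line-by-line description (the content path and exact commands are assumptions, not the original listing):

```dockerfile
# Minimal Apache server image
FROM alpine:latest
# Install Apache2 with apk, then clean up the package cache
RUN apk --update add apache2 && \
    rm -rf /var/cache/apk/*
ENTRYPOINT ["httpd"]
CMD ["-D", "FOREGROUND"]
COPY www/ /var/www/localhost/htdocs/
```

Running `httpd` with `-D FOREGROUND` keeps the process (and therefore the container) alive, which is why no start command is needed on the `docker run` command line.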

Building and pushing the image to DockerHub is the same as before:

Because of the exec additions to the Dockerfile, the command line for starting the Docker image is simpler:

The resulting Docker image is only 10MB as compared to 290MB for the same content and functionality. Nice!

UPDATE (12-Jun-17): Here’s an even smaller image: Stuffing Angular into a Tiny Docker Container (< 2 MB)


