This post documents the development journey and some implementation details of a relatively simple Emacs package and is arguably overkill for such a small piece of functionality. On the other hand, some of these details may be useful for others who want to develop similar functionality.
If you're not interested, skip to the last section for a brief package description, or see the full package here: password-menu.
Making life easier
The need to automate something usually becomes apparent when an often-used workflow is awkward or inefficient. The effort to create the automation will hopefully pay off over time; ideally, the ROI will be high. This is why Emacs users spend so much time tweaking their configurations, even though I suspect the ROI may not be as high as they'd like!
In my case, the pain point was the repeated need to provide passwords and tokens in a variety of situations. This is a common scenario: websites, CLI logins, curl/Postman, etc. all need credentials.
In addition to using auth-source (authinfo.gpg), I had gotten into the bad habit of scattering credentials around various org-mode files (unencrypted! 😞). When I needed a password, it took a search, a buffer switch, a select, a copy, and finally a paste to the target. This was (and felt) clumsy and inefficient. The addition of Org Mode custom link: copy to clipboard made the select/copy a little easier, but the overall experience still sucked.
The search for an existing solution came up empty. This was surprising given that the Emacs package ecosystem (Melpa) is extensive and has been around for a long time.
One of the distinguishing features of Emacs is the ability to customize and extend its functionality with Elisp. So, here we go.
Feature #1: Transient prefix menu
My approach was to use the auth-source-search API along with the transient package to provide the menu UI. This type of "porcelain" has become popular for Emacs projects that have complex user interfaces (Magit! being the best example). My use case is far simpler, but the UI style was still desirable.
The only real Elisp challenge (for me anyway) was creating a dynamic transient prefix list. I couldn't find any examples of this. Transient prefixes are implemented as vectors. Also, note that the transient interface is very complex (Prefixes, Suffixes, Infixes) but this use case only needed prefixes.
A typical transient prefix group looks like this (excerpted from transient-showcase).
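For reference, here's a minimal example of the shape (simplified; not the exact showcase excerpt, and the action commands are hypothetical):

Lisp
(transient-define-prefix my-example-menu ()
  "A static prefix with a single group."
  ["Actions"                      ; a group is a vector
   ("a" "Action A" my-action-a)   ; each entry: (key description command)
   ("b" "Action B" my-action-b)]) ; my-action-a/b are hypothetical commands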
Dynamically creating a vector like this can be implemented with this pseudocode:
(defun get-prefix-list ()
  "Create dynamic transient prefix list."
  (vconcat '["Get password for: "]
           (apply #'vector
                  (mapcar
                   (lambda (source)
                     ;; Extract user and host from source
                     (list
                      (get-next-picker-string)
                      (concat user "@" host)
                      `(lambda () (interactive) (get-password ,user ,host))))
                   (get-sources)))))
You won't find this code in password-menu.el because it was refactored with macros as described below.
An Elisp expert could have knocked out this implementation without breaking a sweat, but it was a learning experience for me.
The development of the picker string functionality (1..0, a1..a0, b1..b0,...) was also a lot of fun.
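For illustration, here's one way such picker keys could be generated (a sketch, not the package's actual code):

Lisp
(defun my/picker-string (n)
  "Return the picker key for 0-based index N: 1..9, 0, a1..a0, b1..b0, ..."
  (let ((digit (mod (1+ n) 10))  ; cycles 1..9 then 0
        (group (/ n 10)))        ; 0 = bare digit, 1 = "a" prefix, 2 = "b", ...
    (if (zerop group)
        (number-to-string digit)
      (format "%c%d" (+ ?a (1- group)) digit))))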
Feature #2: Completing-read menu
All done, right? In the middle of the transient work, I ran across a post that included a good description of using completing-read with a list. The completing-read list interface could leverage the same core transient prefix list elements. Specifically, the user@host string menu item and the lambda that gets the password. The prefix picker string is not needed.
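The idea in miniature (a sketch with hypothetical names, not the package code): build an alist of user@host strings mapped to password-fetching lambdas, then dispatch on the user's choice.

Lisp
(defun my/pick-and-run (items)
  "ITEMS is an alist of (\"user@host\" . FUNCTION) pairs."
  (let ((choice (completing-read "Get password for: " items nil t)))
    ;; Invoke the lambda stored for the chosen entry.
    (funcall (cdr (assoc choice items)))))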
The implementation was refactored with a couple of macros so both UI interfaces could share the list generation. I won't dig into the details here, but the password-menu.el code should be self-explanatory. Getting the completing-read list boiled down to this:
(defun password-menu-get-completing-list ()
  "Get the completing list from the password sources."
  (password-menu--get-source-list
   (password-menu--selection-item)))
The transient version uses these same two macros, but is a little more complex. This shows the usefulness of macros and was another Elisp learning experience.
Feature #3: Kill ring and clipboard expiration
Finally, while investigating other password management packages I ran across this kill ring expiration implementation. It makes sense to remove the secret from the kill ring and system clipboard automatically so I incorporated their code pretty much unmodified.
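In sketch form (a 30-second timeout is assumed here; see the package for the actual code that was adapted):

Lisp
(defun my/copy-secret (secret)
  "Put SECRET on the kill ring and clipboard, then remove it after 30 seconds."
  (kill-new secret)
  (run-at-time 30 nil
               (lambda ()
                 ;; Drop the secret from the kill ring...
                 (setq kill-ring (delete secret kill-ring))
                 ;; ...and overwrite the system clipboard.
                 (gui-set-selection 'CLIPBOARD ""))))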
This additional functionality is like icing on the cake.
Epilogue
I've been dogfooding password-menu for a while. I use it often and can safely say it has improved the "get credential" experience. Before, it was "Oh crap, here we go...". Now it's simply C-x j 3 from anywhere, then paste. Easy peasy!
The other benefit is that I've now removed all those unencrypted secrets from my org files and have everything tucked away in one secure place: authinfo.gpg. That's a big win too.
The password-menu package
Password-menu is a UI wrapper ("porcelain") for the built-in Emacs auth-source secrets library. This package allows you to display auth-source entries in the minibuffer with either completing-read or transient. The password for the selected entry is copied to the kill ring and system clipboard and automatically removed later.
I read a lot of RSS feeds on a select set of topics (see About | Bob's Content of Interest). I sometimes tweet/toot about individual posts but have the desire to expand that capability. I'd like to be able to regularly (and efficiently) publish a curated collection of articles. There are three primary functional requirements needed to accomplish this:
Collection: Deciding which articles you want to export (publish). Filtering can be done based on their title, subject matter (tags), and time constraints. My preference is to specifically mark (or tag) selected entries independent of their subject matter (see below).
Annotation: Some article titles speak for themselves, but others are best presented with associated comments that allow the reader to know what's special about the content. You need the ability to add annotations to individual articles that are included in the published result.
Publication: Once a set of articles has been identified, exporting them in an easily consumable format is the next step. One important component of exporting this content is grouping the articles based on their subject matter.
The Investigation
There are many RSS feed aggregators out there, but there was no solution that even came near to addressing the curation requirements listed above. Apparently, all of those link collection sites are just rolling their own.
As a software developer, finding such a glaring functionality gap that needs to be filled is a real win-win! 🎉 Not only is this something I want to use, but there are probably a few others who will also find a solution helpful.
Now all I had to do was design and develop that solution. I've been using Emacs and Elfeed as my RSS reader for many years. Extending Emacs functionality is a cult-like activity that attracts many. I'm not brain-washed, but even as a (non-evil) Doom user, I do spend a lot of time tweaking my Emacs configuration.
Anyway, providing this RSS curation functionality as an elfeed extension was not only the ideal technical solution, but it was also the perfect opportunity to author my first Emacs package (another win, I hope).
The Solution
Elfeed-curate is an add-on to the elfeed Emacs-based RSS feed management system that provides the ability to easily curate RSS feed entries.
Elfeed's tagging and search functionality takes care of the collection requirements and elfeed-curate adds annotation and publication (exporting) capabilities.
I have an opinionated workflow: mark entries while reading, annotate the ones worth sharing, review the export, and publish.
A key factor (essentially, a non-functional requirement) for making this workflow practical is that each step (marking, annotation, export review, etc.) has to be fast. I think the combination of elfeed and elfeed-curate accomplishes this. I'm also sure there will be refinements and improvements in the future.
I recently posted this Clojurians Slack re-frame question:
I have a deps.edn/figwheel-main re-frame project that I'm trying to add re-frame-10x to. The deps.edn (day8.re-frame/re-frame-10x {:mvn/version "1.5.0"}) and dev.cljs.edn (:preloads [day8.re-frame-10x.preload]) seem correct, and the project builds without errors. When the web app is started though, I get this run-time error:
Uncaught Error: Bad dependency path or symbol: highlight.js.lib.core
I and others have also seen this type of build-time error:
No such namespace: highlight.js/lib/core, could not locate highlight/js_SLASH_lib_SLASH_core.cljs, highlight/js_SLASH_lib_SLASH_core.cljc, or JavaScript source providing "highlight.js/lib/core" (Please check that namespaces with dashes use underscores in the ClojureScript file name) in file target/public/cljs-out/dev/re_highlight/core.cljs
re-highlight (a re-frame-10x dependency) is built with shadow-cljs while the re-frame project is built with lein/deps.edn/figwheel-main. The re-frame project does not have a dependency on either re-highlight or highlight.js. The challenge here is providing the highlight.js (an NPM library) dependencies to re-highlight.
There are ways to include NPM libraries in ClojureScript projects, but each is specific to a particular build system.
This cross-build system situation seems unique though. Providing the highlight.js library to re-highlight turned out to be an eye-opening deep dive into ClojureScript build systems. See the Commentary section at the end of this post.
Long story short, I was able to find a solution for exposing NPM libraries to shadow-cljs projects included in a deps.edn build.
It involves creating a Javascript bundle containing the needed NPM dependency (highlight.js) using webpack and then "manually" making it available to re-highlight.
Here is a step-by-step guide for adding re-frame-10x to a deps.edn re-frame project.
First, create package.json with the needed dependencies.
Shell
npm install highlight.js --save
npm install webpack webpack-cli --save-dev
# Manually add this to package.json:
# "scripts": {
#   "build": "webpack --mode=development"
# },
Final package.json:
{
  "scripts": {
    "build": "webpack --mode=development"
  },
  "dependencies": {
    "highlight.js": "^11.5.1"
  },
  "devDependencies": {
    "webpack": "^5.74.0",
    "webpack-cli": "^4.10.0"
  }
}
Create src/js/main.js with these contents. The require() statements are needed so webpack will include the NPM libraries in the output bundle.
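A plausible sketch of those contents (the global-export approach and exact module paths are assumptions on my part):

JavaScript
// src/js/main.js (sketch): require() pulls highlight.js into the webpack
// bundle, and attaching it to window makes it reachable from the
// shadow-cljs-built re-highlight code.
window.hljs = require('highlight.js/lib/core');
window.hljs.registerLanguage(
  'clojure',
  require('highlight.js/lib/languages/clojure')
);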
Note: The paths and JS build file names (e.g. app.js) above may not match your specific project structure. If so, they would need to be adjusted accordingly.
Add re-frame-10x dependencies to deps.edn:
Clojure
...
day8.re-frame/tracing       {:mvn/version "0.6.2"}
day8.re-frame/test          {:mvn/version "0.1.5"}
day8.re-frame/re-frame-10x  {:mvn/version "1.5.0"}
...
The dev.cljs.edn file needs the following so re-frame-10x is loaded properly:
...
:closure-defines {"goog.DEBUG" true
                  "re_frame.trace.trace_enabled_QMARK_" true}
:preloads [day8.re-frame-10x.preload]
...
Run the project:
Shell
npm run build   # Creates bundle.js
lein repl
# or
lein figwheel
Voilà! The highlight.js dependency in re-highlight is satisfied and re-frame-10x runs as expected.
There must be a better way to do this. I just couldn't find it...
Commentary
This solution seems rather hacky and took way too long to discover. I can't tell you the number of rabbit holes I went down with the NPM inclusion methods listed above. Each of them just uncovered further dependency and configuration issues or would have resulted in undesirable refactoring. The code base I'm working with is rather large and I didn't want to completely change the build system just to add a development tool.
To be honest, the CLJ/CLJS build tools and their cross-pollinated dependency systems (lein, shadow-cljs, etc.) are very confusing. There is no idiomatic/standard way to build Clojure(Script) projects. Everyone is using a different combination or permutation of build systems. Also, the clojure/clj CLI and tooling just plain suck. I think these things are a real barrier to Clojure(Script) adoption.
The Clojure and Scheme Compared comments about Peter Bex's Clojure from a Schemer's perspective article caught my attention. I know that discussion is about language features, but it got me thinking about the different criteria used for selecting a programming language like Clojure. E.g., it's interesting that Irreal considers the JVM a negative, while I consider it a positive. It just goes to show that every situation is unique, i.e. there is no right or wrong in these types of technology decisions.
I've only dabbled with Clojure over the last few years. See Exploring Clojure (and FP vs OOP). The real motivation was to explore the advantages of functional programming. Shifting your fundamental programming paradigm from OOP to FP has far-reaching impact. Language warts are not going to be a major factor in determining your success in doing this type of transformation.
The other major language/technology selection considerations involve organizational headwinds. For most large companies, there are three major challenges:
Inertia. Convincing management that an esoteric language and its techniques (FP) are worth diverting and retraining existing personnel for is a difficult hill to climb. I also think there's a certain amount of organizational entrenchment going on here. For example, the C#/CLR vs. Java/JVM divide is really more cultural than technical. Because niche technologies like Clojure/FP are also generally viewed in this cultural context ("esoteric"), they don't stand a chance.
Talent. Unless you're in Finland, it's difficult to find qualified people. This is also an ongoing issue because programs developed today need to be maintained for the long term (years). With all-remote employment now being more mainstream, maybe this will become less of an issue.
Trends. As the 5-year trends show, Clojure has been on a slow decline. Also, Lisp languages are minor players: they are orders of magnitude smaller than the mainstream (Java, JavaScript, Python) and do not show up in the TIOBE top 20 (Lisp is #36).
Even if you're a small company or a startup, selecting Clojure is still a tough call. This would likely only happen if there were a critical mass of programmers (#2) that already had positive Clojure/FP development experiences.
There are many good reasons to choose Clojure as a front-end (JavaScript) and/or back-end (JVM) technology solution. I would love to see first-hand how well Clojure/FP performs on a large-scale project. Unfortunately, there are also plenty of non-technical reasons that prevent organizations from choosing Clojure.
This organizational structure increases the speed of feature delivery and allows for experimentation to further improve the customer experience. Tooling and automation ("paved roads") are key. The model that Netflix came up with:
"Full cycle developers" is a model where a development team, equipped with amazing developer productivity tools, is responsible for the full software life cycle: design, development, test, deploy, operate, and support.
If you work for a large enough enterprise, you likely have teams of people that provide the following functions:
Product development. Creates and designs application software; includes architects, product owners, and scrum masters
Quality assurance (QA). They test the software. For a medical device company, we call this team Verification and Validation (V&V)
Site Reliability Engineering (SRE). Ensures scalability and reliability of the infrastructure and applications. They do performance testing and may implement some Chaos engineering techniques.
Development operations (DevOps). Manage the code repositories, shared development tools, CI/CD pipelines, middleware, databases, etc.
Infrastructure management (on-prem hardware and operating systems)
Cloud management (same as above, but in the cloud)
Applications support (monitor and manage applications in production)
Do not confuse FCTs with "Full Stack Teams" (see Full Stack Pronounced Dead). This "stack" refers to technologies that are used to implement a typical web-based application (e.g. LAMP).
FCTs are about supporting functionality end-to-end (product idea to production), but both have the challenge of developer specialization in common. An FCT has to broaden its skill set even further to include application/infrastructure deployment, monitoring, and support. This is the future!
Full Cycle Team Challenges for Medical Device Companies
The transformation from a legacy organization (as described above) to FCTs is made even more challenging for a medical device company creating software that has to maintain FDA regulatory controls (see Quality System Regulation Subpart C, Design Controls, § 820.30).
Below is a list of regulatory and transition considerations that impact the release process. Most are associated with keeping the Design History File (DHF) documentation up-to-date. The organizational challenge in a FCT world is figuring out who is responsible for these tasks.
Spoiler alert: The suggested answers should be obvious, but many times the best I can do is just ask the question. Every organization, and even different teams within a single organization, will have different solutions. These can be tough problems to solve. Don't shoot the messenger!
Medical Device Data System (MDDS)
Not all of your software may be under FDA Class II/III regulatory controls. Some could fall under MDDS, see Identifying an MDDS. There is still some risk associated with MDDS, but special controls and premarket notification -- the 510(k) -- are not necessary (see MDDS Rule).
MDDS software requires the same QMS documentation (see MDDS Section VI-E. Current Good Manufacturing Practices (CGMP)/QS Regulation/MDR Compliance of the rule) so most of the items listed here still apply.
Also, see Comment 25 from the rule, which addresses "modular software", i.e., mixing MDDS components with medical device components. The response says "The MDDS regulation does not necessarily prevent modular implementation.", but the FDA can't make a "generalized determination" about the various ways these combinations may be made. This may be a situation you run into, and the FDA suggests it is best to contact them if you have questions.
Based on the intended use and the safety risk associated with the software to be developed, the software developer should determine the specific approach, the combination of techniques to be used, and the level of effort to be applied. While this guidance does not recommend any specific life cycle model or any specific technique or method, it does recommend that software validation and verification activities be conducted throughout the entire software life cycle.
FDA guidance documents, and regulatory standards in general (e.g. IEC 62304), tell you what to do, but leave the how up to the organization.
Let's highlight the SRS* to System test verification from the V-model. This is essentially end-to-end testing. In a microservice-based architecture, each FCT is likely responsible for different sets of services. These services may be dependent on the services provided by other teams.
Which team is responsible for ensuring that the entire system is functioning properly (i.e. end-to-end test protocols and results) after changes are made to one or more of these services?
In an ideal world, these end-to-end tests are completely automated, but even then someone still needs to maintain them.
Validation testing (was the right product built?) presents even more challenges, as a single FCT may be responsible for only a small portion of the entire product.
Risk analysis is typically done by a cross-functional team that may span multiple business units, but it is probably not unreasonable for the FCT Product Owner to drive this process and get the documentation updated as needed.
Traceability
From the FDA guidance:
A source code traceability analysis is an important tool to verify that all code is linked to established specifications and established test procedures.
Creating this documentation is well suited for automation. It still requires ensuring that all requirements and related test scenarios are properly tagged so they can be parsed to produce a release report.
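As a toy example of this kind of automation (the tag convention and file layout here are invented for illustration):

Python
import re
from pathlib import Path

REQ_TAG = re.compile(r"@req:(SRS-\d+)")  # hypothetical requirement tag

def trace_report(test_dir="tests"):
    """Map each requirement ID to the test files that reference it."""
    report = {}
    for path in Path(test_dir).rglob("*.py"):
        for req in REQ_TAG.findall(path.read_text()):
            report.setdefault(req, []).append(path.name)
    return report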
Software Design Evidence
From the FDA guidance:
The Quality System regulation requires that at least one formal design review be conducted during the device design process. However, it is recommended that multiple design reviews be conducted (e.g., at the end of each software life cycle activity, in preparation for proceeding to the next activity).
This is a challenge for any Agile-based development process so is not specific to the FCT-based organization. Running formal design reviews as early in the development process as possible should be a team responsibility.
Manual Approval Gates
For many unregulated software products, continuous integration (CI) and continuous delivery (CD) are a reality: code can be pushed, run through the CI/CD pipelines, and delivered to customers without human intervention.
It is very unlikely (not impossible though I suppose, depending on the product) that this would occur for FDA-regulated software. Even with automated document generation, software deployment to production will still require human sign-off steps and audit trails.
Off-The-Shelf (OTS) Software
OTS/SOUP Software Validation documentation needs to be kept up-to-date. This is mostly a book-keeping exercise for OTS/SOUP that is part of the software product. For tools though, see OTS/SOUP Software Validation Strategies.
Another consideration to keep in mind for including 3rd party software into your product is the software license. The corporate (legal) policy should dictate license requirements, but teams would be aided by an automated tracking process.
Infrastructure
Installation, operational, and performance qualification -- IQ/OQ/PQ. FDA regulated software must have these processes in place to ensure that after any changes are made, the infrastructure continues to meet quality requirements. With the microservice architecture becoming a best practice, the team would now be responsible for documenting the IQ/OQ/PQ for their particular microservice or container flavor(s).
Cloud Offerings
Serverless architectures. (Note: I'm most familiar with AWS, so I'll use their cloud products as examples; Azure and GCP have similar offerings.) One of the key advantages of Lambda, Fargate, RDS, and similar managed/SaaS products is that they take on the undifferentiated heavy lifting: AWS is responsible for the care and maintenance of the underlying infrastructure and servers. For on-prem servers, this is something the organization spends significant time and money on, but these expenditures do not directly benefit the customer. Serverless allows companies to focus their efforts on things that make a difference to their customers.
How do you ensure IQ/OQ/PQ quality when you don't have control over the servers that are running your application(s)?
Another consideration: Teams will need to take regulatory impact into consideration when selecting new cloud technologies.
The use of IaC (e.g. CloudFormation or Terraform) may require new release cycle processes, i.e., since this code is not part of the application, you may want a separate release cycle for infrastructure updates. The same is true for container (Docker) code updates.
The FCT should be responsible for the IaC associated with their product as it directly impacts both functionality and performance.
Transformation
When thinking about transforming to an FCT-based organization, the 2019 AWS re:Invent keynote by Andy Jassy comes to mind. His "transformation" refers to migrating from on-premise to cloud infrastructure (AWS, of course), but I think the non-technical transformation recommendations he outlines (start: 5:04, end: 11:48) are also applicable to the FCT organizational change:
I think aggressive goals (item #2) is particularly important. Legacy organizations have a lot of inertia that needs to be overcome in order to move things forward. Breaking those initial barriers is even more difficult when you're having to deal with regulatory requirements.
Bottom Line
FDA regulatory requirements add tasks and documentation to the software release process. This has always been the case for medical device companies, but how this additional work is managed when trying to implement full-cycle teams can be a complicated problem to solve.
Just like unregulated development, providing the tooling to automate these tasks is the key to allow teams to deliver quality software to customers more quickly.
---------------
*SRS, Software Requirements Specification. The old-school waterfall requirements document. I don't miss those days!
This is just a quick note that will hopefully save someone time.
When I upgraded Windows 10 (64-bit) from 1909 to 2004 I found that Virtualbox 6.1.x no longer worked properly. All of my guest instances (Ubuntu, Mint, etc.) failed to start. Specifically, they just hung with a blinking cursor and there were no errors in the logs.
It turns out that when Windows 10 2004 is installed it enables the Windows Hypervisor Platform feature. Note that the Hyper-V feature was disabled prior to the upgrade and remained so after.
To check this setting, run OptionalFeatures.exe from a Windows command shell and look for the Windows Hypervisor Platform entry in the Windows Features dialog.
The resolution to the hang problem is to disable this feature. Doing this is simple:
Uncheck the Windows Hypervisor Platform checkbox.
Reboot. Even though it's not indicated when you do step #1, a reboot is required to disable the feature.
Pi Day (3/14) is in a couple of days so I want to wish everyone a Happy Pi Day 2020! It's great to see that 55% of Americans plan to celebrate, and many will be eating pie or pi-themed food (whatever that is).
My work colleague and basketball buddy Stan sells a nerd t-shirt here: 314 Digits of Pi.py. It has the Python code on the front and the results on the back. I "won" one of these at our annual White Elephant gift exchange in December. Even though the Amazon Best Sellers Rank is #12,306,667 in Clothing, Shoes & Jewelry, I really like it!
I've been staring at the code backward in the mirror for a number of months. This got me wondering: what would this algorithm look like in Clojure?
The first pass on the port was pretty straightforward, but I think it's worth noting some of the subtle differences. All of the code is here: pi-digits.
Here's the original Python code and result:
pi.py
Python
import math

def pi():
    r = 4 * (4 * arccot(5) - arccot(239))
    return str(r)[0] + '.' + str(r)[1:-5]

def arccot(x):
    total = power = 10**319 // x
    divisor = 1
    while abs(power) >= divisor:
        power = -power // x**2
        divisor += 2
        total += power // divisor
    return total

print("314 digits of Pi " + pi())
result
ZSH
$ python3 pi.py
314 digits of Pi 3.14159265358979323846264338327950288419716939937510582097494459230781640628620899862803482534211706798214808651328230664709384460955058223172535940812848111745028410270193852110555964462294895493038196442881097566593344612847564823378678316527120190914564856692346034861045432664821339360726024914127372458700660631
For the interested, here's an explanation of the calculation of Pi using fixed-point math for speed improvements: Pi - Machin. The Machin formula, developed by John Machin in 1706 (!), is: π/4 = 4·arccot(5) − arccot(239).
And here's a Clojure version that returns the identical result:
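(The listing below is a reconstructed sketch, not necessarily the exact pi-digits code; pow is a big-integer power helper, and atoms stand in for Python's mutable variables.)

Clojure
(defn pow [base exp]
  (reduce *' (repeat exp base)))  ; big-integer base^exp

(defn arccot [x]
  (let [power   (atom (quot (pow 10 319) x))
        divisor (atom 1)
        total   (atom @power)]
    (while (>= (abs @power) @divisor)
      (reset! power (quot (- @power) (pow x 2)))
      (swap! divisor + 2)
      (swap! total + (quot @power @divisor)))
    @total))

(defn pi []
  (let [r (str (*' 4 (- (*' 4 (arccot 5)) (arccot 239))))]
    (str (subs r 0 1) "." (subs r 1 (- (count r) 5)))))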
One problem with the arccot (arc-cotangent) implementation is that it just duplicates the Python logic and is not idiomatic Clojure. Instead of coding this in a non-functional style, i.e. using mutable state (atom), let's create a functional version:
arccot-recur
Clojure
(defn arccot-recur
  [x]
  (loop [power (quot (pow 10 319) x)
         divisor 1
         tot 0]
    (let [total (+ tot (quot power divisor))]
      (if (>= (abs power) divisor)
        (recur (quot (- power) (pow x 2)) (+ divisor 2) total)
        total))))
We use loop/recur for a recursive implementation. This has the benefit of tail-call optimization (TCO). Here are the execution times (average of 10 runs) for calculating Pi with the three implementations:
Time in seconds, by number of digits:

Method          10,000   50,000   100,000   200,000
python           0.158     3.74      14.9      60.3
clojure-while    0.260     5.14      19.8      78.6
clojure-recur    0.252     5.10      19.8      78.5
Python is certainly faster, but the purpose here was not to compare computation speed. It was to get a Clojure version of the t-shirt made! Who's the real nerd now? 🙂
The company I worked for over 10 years ago, CardioDynamics*, manufactured an impedance cardiography (ICG) diagnostic device. The technology behind ICG and the Onera Bioimpedance Patch to Detect Sleep Apnea is called thoracic electrical bioimpedance (TEB).
It's no surprise that Onera has leveraged research on monitoring lung resistivity with this technology (e.g. here and here) and is applying AI for automated respiratory event detection. Since electrode placement is important for reliable data acquisition, the patch is a good design choice, but it doesn't look like it would be that comfortable to wear to sleep.
Another review, Wearable Patch Uses Machine Learning to Detect Sleep Apnea, notes that assessing sleep apnea requires additional physiological signals to be monitored and that more work needs to be done to combine this technology with these other signals.
………………………………………………
*Purchased by SonoSite in 2009. SonoSite has since stopped manufacturing the BioZ DX.
The Consumers Electronics Show (CES) 2020 was held earlier this month in Las Vegas. Of the ~4,400 exhibiting companies, the "Digital Health" category had 573 exhibitors. Of these, I found 9 companies utilizing EEG technology.
Besides the usual sleep aid applications, there are still a lot of focus/meditation/relaxation apps. Mental health screeners seem to be a new trend. Other than the visual cortex monitor (NextMind), BCI devices for use as video game controllers have cooled down.
Here's a quick summary, including the application categories they fall into.
Multi-sensor meditation device that provides real-time feedback on your brain activity, heart rate, breathing, and body movements to help you build a consistent meditation practice.
Healium is the world's first virtual and augmented reality platform powered by brainwaves (uses the Muse 1 headband) and heart rate via consumer wearables.
Learns about your sleep cycle and intelligently adjusts the audio-visual signals to induce lucid dreams, make you fall asleep easily and wake up naturally.
I've always viewed functional programming (FP) from afar, mostly because object-oriented programming (OOP) is the dominant development methodology I've been using (Java, Ruby, C#, etc.) for many years. A majority of the articles I've read on FP have statements like this:
If you’ve never tried functional programming development, I assure you that this is one of the best time investments you can make. You will not only learn a new programming language, but also a completely new way of thinking. A completely different paradigm.
Switching from OOP to Functional Programming gives an overview of the differences between FP and OOP. It uses Scala and Haskell for the FP example code, but I think it still does a good job of getting the major concepts across:
I do not think that FP, or any single paradigm/language/framework/etc. for that matter, is a silver bullet. On the contrary, I'm a true believer in the "right tool for the job" philosophy. This is particularly true in the software industry where there is such a wide variety of problems that need to be solved.
This view of programming paradigms is cute but actually misleading.
As a developer, it's important to always be learning new problem-solving approaches, i.e. adding new tools to your tool belt. This will not only allow you to select the best solution(s) for the job; you'll also be better able to recognize the trade-offs and potential problem areas that might arise with any solution. I think understanding FP concepts will make you a better programmer, but not necessarily because you are using FP techniques and tools.
Don’t be tricked into thinking that functional programming, or any other popular paradigm coming before or after it, will take care of thinking about good code design instead of us.
The purpose of this article is to present my own experiences in trying to use Clojure/FP as an alternative approach to traditional OOP. I do not believe there is anything here that has not already been covered by many others, but I hope another perspective will be helpful.
Lisp
I chose a Lisp dialect for this exploration for several reasons:
I have some Lisp experience from previous projects and was always impressed with its simplicity and elegance. I really wanted to dig deeper into its capabilities, particularly macros (code as data - programs that write programs). See Lisp, Smalltalk, and the Power of Symmetry for a good discussion of both topics.
I'm also a long-time Emacs user (mostly for org-mode) so I'm already comfortable with Lisp.
I investigated a variety of Lisp dialects (Racket, Common Lisp, etc.) but decided on Clojure primarily because it has both JVM and Javascript (ClojureScript) support. This would allow me more opportunity to use it for real-world projects. This is also why I did not consider alternative FP languages like Haskell and Erlang.
Lastly, there's the obligatory XKCD cartoon that makes fun of Lisp parentheses (which I address below).
There are also many good articles that describe the benefits of FP (immutable objects, pure functions, etc.).
Clojure (and FP) enthusiasts claim that their productivity is increased because of these attributes. I've seen this stated elsewhere, but from the article above:
Clojure completely changed my perspective on programming. I found myself as a more productive, faster and more motivated developer than I was before.
It also has high praise from high places.
Who doesn't want to hang out with the smart kids?
Clojure development notes
The following are some Clojure development observations based on both the JSON Processor (see below) and a few other small ClojureScript projects I've worked on.
The REPL
The read-evaluate-print-loop, and more specifically the networked REPL (nREPL) and its integration with Emacs clojure-mode/cider is a real game-changer. Java has nothing like it (well, except for the Java 9 JShell, which nobody knows about) and even Ruby's IRB ("interactive ruby") is no match.
Being able to directly evaluate expressions without having to compile and run a debugger is a significant development time-saver. This is a particularly effective tool when you are writing tests.
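For example, a typical REPL-driven loop looks like this (illustrative only):

Clojure
(require '[clojure.string :as str])

;; Define a function, then evaluate the forms in the comment block in
;; place (e.g. with CIDER's eval-last-sexp) instead of recompiling anything.
(defn slugify [s]
  (str/replace (str/lower-case s) #"\s+" "-"))

(comment
  (slugify "Hello REPL World")  ;; => "hello-repl-world"
  )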
Parentheses
A lot of people (like XKCD above) make fun of the Lisp parentheses. I think there are two considerations here:
Keeping parentheses matched while editing. In Clojure, this also includes {} and []. Using a good editor is key - see Top 5 IDEs and text editors for Clojure. For me, Emacs smartparens in strict mode (i.e. don't allow mismatches at all), plus some wrap/unwrap and slurp/barf keyboard bindings, all but solved this issue.
When reading Lisp code, I think parentheses get a bad rap. IMO, the confusion has more to do with the difference in basic Lisp flow-control syntax than with stacked parentheses. Here's a comparison of Ruby and Lisp if syntax:
Ruby if
Ruby
if var.nil?
  do_nil_stuff(var)
else
  do_not_nil_stuff(var)
end
Lisp If
Lisp
(if (nil? var)
  (do-nil-stuff var)
  (do-not-nil-stuff var))
Once you get used to the differences, code is code. More relevantly, bad code is bad code no matter what language it is. This is important. Here's a good read on the subject: Effective Mental Models for Code and Systems ("the best code is like a good piece of writing").
Project Management
Leiningen ("automating Clojure projects without setting your hair on fire") is essentially like Java's Maven, Ruby's Rake, or Python's pip. It provides dependency management, plug-in support, applications/test runners, customized tasks, etc.
Coming from a Maven background, lein configuration management and usage made perfect sense. The best thing I can say is that it always got the job done and, even more important, it never got in the way!
Getting Answers
I found the documentation (ClojureDocs) to be very good for two reasons:
Every function page has multiple examples, and some are quite extensive. You typically only need to see one or two good examples to understand how to use a function for your purposes. Having to read the actual function description is rarely needed.
Related functions. The "SEE ALSO" section provides links to functions that can usually improve your code: if → if-not, if-let, when, ... This is very helpful when you're learning a new language.
I lurked around some of the community sites (below). The threads I read were respectful and members seemed eager to help.
On the whole, I was very pleased with the development experience. Solving problems with Clojure really didn't seem that much different from other languages. The extensive core language capabilities, along with the robust ecosystem of libraries (The Clojure Toolbox), make Clojure a pleasure to use.
I see a lot of potential for practical uses of Clojure technologies. For example, Clojurified Electron plus reagent-forms allowed me to build a cross-platform Electron desktop form application in just a couple of days.
I was only able to explore the tip of the Clojure iceberg. Based on this initial experience, I'm really looking forward to utilizing more of the language capabilities in the future.
FP vs OOP: B
What I was able to experience in this brief exploration did not live up to my expectations for FP. The lower grade reflects the fact that the Clojure projects I've worked on were not big enough to really take advantage of the benefits of FP described above.
This is a cautionary tale for selecting any technology to solve a problem. Even though you might choose a language/framework that advertises particular benefits (FP in this case), it doesn't necessarily mean that you'll be able to take advantage of those benefits.
This also highlights the silver bullet vs good design mindset mentioned earlier. To be honest, I somehow thought that Clojure/FP would magically solve problems for me. Of course, I was wrong!
I'm sure this grade will improve for future projects!
Macros: INC (incomplete)
I was also not able to fully exercise the use of macros like I wanted to. This was also related to the nature of the projects. I normally do DSL work with Ruby, but next time I'll be sure to try Clojure instead.
TL;DR
The rest of this article digs a little deeper into the differences between the Ruby and Clojure implementations of the JSON Processor project described below.
At the end of the day, the project includes two implementations of close to identical functionality that can be used for comparison. Both have:
Simple command line parsing and validation
File input/output
Content caching (memoization)
JSON parser and printer
Recursive object (hash/map) traversal
The Ruby version (~57 lines) is about half the size of Clojure (~110 lines). This is a small example, but it does point out that Ruby is a simpler language and that there is some overhead with the Clojure/FP programming style (see Pure Functions, below).
JSON Processor
The best way to learn a new language is to try to do something useful with it. I had a relatively simple Ruby script for processing JSON files. Reproducing its functionality in Clojure was my way of experiencing the Clojure/FP approach.
The Ruby version is in the ./ruby directory, while the Clojure version is in ./src/json_processor. See the README.md file for command line usage.
The processor is designed to simply detect a JSON key that begins with "include" and replace the include key/value pair with the contents of a file's (./<current_path>/value.json) top-level object. Having this include capability allows reuse of JSON objects and can improve management of large JSON files.
So, if two files exist:
JSON files
base.json
level1.json
and base.json contains:
base.json
{
  "baseString": "zero",
  "include": "level1"
}
And level1.json contains:
level1.json
{
  "level1": {
    "level1String": "string1",
    "level1Float": 45.67
  }
}
After running base.json through the processor, the contents of the level1 object in the level1.json file will replace "include":"level1", with the result being:
result
{
  "baseString": "zero",
  "level1String": "string1",
  "level1Float": 45.67
}
Also, included files can contain other include files, so the implementation is a good example of a recursive algorithm.
There are example files in the ./test/resources directory that are slightly more complex and are used for the testing.
The passed-in object is purposely modified. The Ruby each function is used to iterate over each key/value pair and replace the included content as needed. It deletes the "include" key/value pair and adds the JSON file content in its place. Again, the returned object is a modified version of the object passed to the function.
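In sketch form (include_file_content is a hypothetical helper that reads ./<current_path>/<value>.json; the actual script may differ):

Ruby
def process_json(obj)
  obj.keys.each do |key|                      # snapshot keys; obj is mutated below
    value = obj[key]
    if value.is_a?(Hash)
      process_json(value)                     # recurse into nested objects
    elsif key.start_with?('include')
      obj.delete(key)                         # remove the include key/value pair...
      obj.merge!(include_file_content(value)) # ...and splice in the file's content
    end
  end
  obj                                         # the modified passed-in object
end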
process_json.clj
(defn process-json
  "Process a JSON object"
  ([base-dir obj]
   (reduce-kv
    (fn [m k v]
      (if (is-a-map? v)
        (assoc m k (process-json base-dir v))
        (if (s/starts-with? (name k) "include")
          (merge m (get-include-content base-dir v))
          (assoc m k v))))
    {} obj)))
The immutability of Clojure objects and the use of reduce-kv mean that all key/value pairs need to be added to the 'init' (m) collection with (assoc m k v). This was not necessary for the Ruby implementation.
A similar comparison, but with more complexity and detailed analysis, can be found here: FP vs. OO List Processing.
Pure Functions
You'll notice in the Ruby code that the class variable @dir_name, which is created in the constructor, is used to create the JSON file path:
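Something along these lines (a sketch; the method name is hypothetical):

Ruby
# @dir_name is set in the constructor; include paths hang off of it.
def include_file_path(name)
  File.join(@dir_name, "#{name}.json")
end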
To an OOP developer, having base_dir as a parameter in every function definition may seem redundant and wasteful. The Functional point-of-view is that:
Having mutable data (@dir_name) can be the source of unintended behaviors and bugs.
Pure functions will always produce the same result and have no side effects, no matter what the state of the application is.
These attributes improve reliability and allow more flexibility for future changes. This is one of the promises of FP.
Final Thought
I highly recommend giving Clojure a try!
Bad joke:
I have slurped the Clojure Kool-Aid and can now only spit good things about it.