These aren’t the devs you’re looking for

February 2, 2015

In the OpenMRS community, we often hear reference to “core devs.”  Who are these people?  What makes them “core devs” anyway?

A little history...
OpenMRS started when Regenstrief Institute and Partners In Health (PIH) decided to collaborate while building systems for separate HIV-related projects (Regenstrief in Kenya, PIH in Rwanda).  Originally, Ben Wolfe and Darius Jazayeri were the primary developers for Regenstrief and PIH, respectively.  Each was focused primarily on a solution for his own organization, but they collaborated on the same platform toward that end.  Several years later, Paul Biondich, leading Regenstrief’s Global Health Informatics team, arranged for Regenstrief to fund three developers (Daniel Kayiwa, Wyclif Luyima, and Rafał Korytkowski) full-time to focus on OpenMRS development.  Because Daniel, Wyclif, and Rafał could focus solely on OpenMRS while other developers in the community were volunteering time or working on OpenMRS part-time, these three devs have often been referred to as “core devs.”

OpenMRS is used all over the world, with more than 115,000 downloads across more than 200 countries.  As of February 2015, there are 1,263 subscriptions to the OpenMRS Developers Mailing List.  The initial release of OpenMRS 1.9 was thanks to substantive contributions from more than 70 devs.  GitHub shows 930 forks of openmrs-core and over 140 contributors.  At the OpenMRS Implementers Meeting in Maputo (#MOZ15), we introduced Developer Stages, both to recognize & empower developers based on their level of expertise and community engagement and to adopt a more scalable approach.  As Yehuda Katz argues in his great discussion of Indie OSS, it’s not healthy to make a distinction between “the core team” and the community.  So, why are we referring to a few developers as “core devs” and labeling the other 99.9% as non-core?

“These aren’t the devs you’re looking for.” –Obi-Wan Kenobi (sort of)

As a community, we need to evolve beyond distinctions like “core” vs. “non-core” developers toward a more scalable approach: Developer Stages.  So, if you see me (or anyone in the OpenMRS community who agrees with me) calling someone out in the future for using the term “core devs,” please understand it’s not for lack of respect & appreciation for the awesome contributions of those who have carried that label; rather, it’s out of appreciation for all the awesome developers around the world, past & future, “core devs” included, who have contributed and will contribute to saving lives through coding for OpenMRS.

From now on, when you feel the urge to say “core devs,” try substituting “/dev/5’s” or “available /dev/4’s & /dev/5’s.”  As a community, we will be working to make this attribution easier to see and understand, so that in the future, when someone refers to the “available /dev/5’s,” they’ll be referring to far more than three people. 🙂


OpenMRS Developer Stages

December 15, 2014

During the OpenMRS Leadership Camp 2014, we talked about ways to empower and scale the OpenMRS Developer Community. While our approach to collaborative development has gotten us far, we don’t have a clear process for developers to grow in responsibilities.  The fact that OpenMRS has approximately the same number of developers doing code review or pushing to core as it had five years ago is a significant failure on our part.  We’ve been talking about ways to be more inclusive for a while, but haven’t put these desires into something actionable… until now.

With help from several folks in the community, I was able to come up with a draft process for recognizing the stages of a developer within the OpenMRS community:

/dev/null (“Community”)
Criteria
  • Be or desire to be a developer
Expectations
  • Can communicate well and show respect for others
  • Willing to be open
Privileges
  • Chat with devs on IRC
  • Can become a /dev/1
  • Claim an intro ticket (or a non-intro ticket with assistance from a /dev/2+)
/dev/1 (“Learning”)
Criteria
  • OpenMRS ID
  • Development Environment
  • RTFM
  • Introduced
  • Claim ticket
  • Pull Request Accepted
  • Pass 5-10 question Introductory Quiz
Expectations
  • Has tackled at least one intro ticket
  • Can write a unit test
Privileges
  • GSoC
  • Post to dev list
  • Propose topic(s) on Dev Forum(s)
/dev/2 (“Contributing”)
Criteria
  • Helps others
  • Participate in Dev Forum(s)
  • Active ≥3 months
Expectations
  • Can handle low complexity tickets
  • Has tackled at least 10 tickets
  • Can create a module
  • Has pair programmed
Privileges
  • Claim low-to-moderate complexity tickets
  • Publish a module and resources to Maven repo
/dev/3 (“Cooperating”)
Criteria
  • Curate ticket(s)
  • Working with others
Expectations
  • Can handle moderate complexity tickets
  • Can function independently, yet looks for opportunities to pair program
  • Assisting with code review
Privileges
  • Code review
  • Configure CI
  • Lead Sprint
  • Push to module(s)
/dev/4 (“Collaborating”)
Criteria
  • Performed at least one Spike for the community
  • Leading Dev Forum(s)
  • Leading Sprints
  • Overseeing code reviews
  • Endorsed by implementer(s)
Expectations
  • Can handle complex tickets
  • Has publicly thanked at least 10 other devs
  • Finds effective ways for developers across organizations to work together
Privileges
  • Push to core
/dev/5 (“Leading”)
Criteria
  • Responsible for a component
  • Mentor
  • Engages with implementation(s)
Expectations
  • Leading development
  • Finds ways to make local implementation development benefit the community and community development benefit local implementations.
Privileges
  • Can establish coding conventions
  • Can deprecate services

The overarching goal of defining a process like this is to give developers a clear, objective path for growing in responsibility within the community.

Our goal would be to fully automate the transition from /dev/null to /dev/1 – i.e., anyone in the community should be able to transition to /dev/1 without requiring manual review from anyone else in the community.  Realistically, transitioning through later stages of development would require some manual review, but our hope would be to keep things as objective as possible, so any developer would know what she needed to do in order to advance to the next stage of development.
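To make the idea of a fully automated transition concrete, the /dev/null → /dev/1 check could in principle be expressed directly in code. Here is a hypothetical sketch; the class name, criteria flags, and quiz-passing threshold are all made up for illustration, not an actual OpenMRS tool:

```java
// Hypothetical automated check for the /dev/null -> /dev/1 transition.
// The criteria mirror the /dev/1 row in the table above; the passing
// threshold for the introductory quiz is an assumption.
public class DevStageCheck {

    static boolean qualifiesForDev1(boolean hasOpenmrsId, boolean hasDevEnvironment,
            boolean readTheManual, boolean introduced, boolean claimedTicket,
            boolean pullRequestAccepted, int quizScore) {
        // every /dev/1 criterion must hold, and the 5-10 question
        // introductory quiz must be passed (threshold assumed here)
        return hasOpenmrsId && hasDevEnvironment && readTheManual && introduced
            && claimedTicket && pullRequestAccepted && quizScore >= 4;
    }

    public static void main(String[] args) {
        // a candidate who has met every criterion
        System.out.println(qualifiesForDev1(true, true, true, true, true, true, 5));
        // a candidate who has only set up an ID and dev environment
        System.out.println(qualifiesForDev1(true, true, false, false, false, false, 2));
    }
}
```

Because every criterion is a yes/no fact that a system could verify (OpenMRS ID exists, pull request merged, quiz passed), no human reviewer is needed for this first step.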


While I could spend more time revising this blog post, its primary purpose is to share work in progress, so I’m going to stop editing and let it go, warts ‘n’ all. 🙂

You can see the actual OpenMRS Developer Stages wiki page here.

OpenMRS Timeline 2012-2015

December 5, 2014

I recently sent an email to the OpenMRS Implementers Mailing List describing the evolution of OpenMRS between 2012 and 2015. While writing the email, I thought it might be easier to describe in an image:

OpenMRS Timeline 2012-2015

Our goal is to get to a UI-less platform (providing an API & web services) used by an OpenMRS application with a new, agile UI framework that meets or exceeds the needs of existing implementations. Currently, we have several implementations working in the 2.x UI, but the majority of implementations are still using the 1.x UI. To achieve our goal, we will not only need to reach OpenMRS 2.3 with comparable or greater functionality than OpenMRS 1.9, but also find a way to ease the burden of migrating from 1.x to 2.x (e.g., migration tools, converting key modules to run in 2.x, possibly finding a way to run most 1.x modules within the 2.x framework, etc.).

Informal Feedback in a Single Click

September 26, 2014

There are many ways to capture feedback from users & testers, from feedback buttons built into the app to issue trackers and tools like JIRA Capture. Another method we have been using, especially for upcoming releases or new features or widgets, within the OpenMRS community is a side-by-side feedback page.

[feedback-page screenshot]

The application is on the left and an etherpad is on the right. I am not suggesting this as an approach for issue tracking, but we have found it to be a quick & easy way of collecting community feedback.  The combination of a link taking testers directly to the product to be tested along with the near-zero activation energy required by etherpad makes it a handy combination.  It’s also nice to be able to throw a brief intro into the etherpad to direct people on what to test and how to report feedback.  And lastly, there’s a nice side effect of people seeing each other’s activity in real time.  When combined with a developer responding to feedback and re-deploying fixes in real time, it can be incredibly powerful.

Anyway, the main reason I decided to blog on this is that I tweaked our side-by-side tool a bit and wanted to record my one-page feedback HTML here for the next time I need it.  Here it is:

<html>
<head>
<title>Feedback</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6/jquery.min.js"></script>
<style type="text/css">
.header {
	width: 100%;
	height: 20px;
	float: left;
	overflow: hidden;
}
#feedbackButton {
	position: relative;
	left: 1050px;
	text-decoration: none;
	font-weight: bold;
	cursor: pointer;
	font-family: arial;
}
#content {
	width: 1050px;
	height: 98%;
	float: left;
	overflow-y: auto;
	overflow-x: hidden;
}
#contentFrame {
	min-width: 1000px;
}
#feedback {
	width: 550px;
	height: 98%;
	float: right;
	overflow: hidden;
}
</style>
</head>
<body style="margin:0; padding:0; width:1600px">
<div class="header">
	<span id="feedbackButton"></span>
</div>
<div id="content">
  <iframe id="contentFrame" src="" width="100%" height="100%"></iframe>
</div>
<div id="feedback">
  <iframe id="feedbackFrame" src="" width="100%" height="100%"></iframe>
</div>
</body>
<script>
$(document).ready(function() {
	function getParam(name) {
		var value = new RegExp('[\?&]' + name +
			'=([^&#]*)').exec(window.location.href);
		return (value == null ? null : decodeURIComponent(value[1]) || 0 )
	}
	function config(conf) {
		$('title').text(conf.title)
		$('#contentFrame').attr('src', conf.left)
		$('#feedbackFrame').attr('src', conf.right)
	}
	function setButtonLabelAndPosition() {
		$('#feedbackButton').html($('body').scrollLeft() > 0 ?
			'move &rarr;' :'&larr; move')
		$('#feedbackButton').offset({top:0, left:Math.min(
			1050, $(window).width() + $(window).scrollLeft() - 75)})
	}
	$(window).resize(function() {
		if ($(window).width() >= 1600) {
			$('#feedbackButton').hide()
		} else {
			$('#feedbackButton').show()
			setButtonLabelAndPosition()
		}
	})
	$(window).scroll(setButtonLabelAndPosition)
	$('#feedbackButton').click(function() {
		$(window).scrollLeft($(window).scrollLeft() > 0 ? 0 : Math.max(
			550, 1600 - $(window).width()))
	})
	setButtonLabelAndPosition()
 
	/*
	 * Configure side-by-side feedback by providing the
	 * following properties:
	 *   title = Title of page
	 *   left = URL of site to be reviewed
	 *   right = URL of page (e.g., etherpad) for feedback
	 */
	config({
		title : getParam('title'),
		left  : getParam('left'),
		right : getParam('right')
	})
})
</script>
</html>
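The page reads its configuration from query-string parameters (`title`, `left`, `right`), so launching a feedback session is just a matter of crafting a URL. A hypothetical example (the host and paths here are made up for illustration):

```
http://example.org/feedback.html?title=New%20Widget%20Feedback&left=http://example.org/app&right=http://example.org/etherpad/p/widget-feedback
```

Remember to URL-encode the parameter values, since the `left` and `right` URLs themselves may contain characters like `?` and `&`.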

OpenMRS Data Model Browser

June 27, 2014

Ever since the beginnings of OpenMRS, we’ve used the data model as a reference and as a teaching tool.  As the number of tables has grown, it has become harder to keep the data model diagram updated.  I also wanted an easy way to search for tables, columns, or foreign keys.  So, I created dbtohtml to generate an easily browsable, standalone, single-page HTML view of the data model.

@should do test-driven development

May 11, 2014

I recently described the @should taglet created by OpenMRS that helped the community adopt and sustain better testing practices.  Mário asked a good question about test-driven development (TDD):

I believe that BDD and TDD are very connected (except, when talking about integration tests). But I don’t see how [we can] use TDD if we need to create the @should tags first. Could you please clarify a little bit how that would work? –Mário Areias

While I don’t think we’re doing much TDD in the OpenMRS Community at this point, it would be great to evolve this direction.  The real question is: will the @should tags that helped us start testing our code become an impediment to TDD?  I don’t think so.

Let’s try a simple example to see how we could be TDD-ish with @should tags.  Imagine that we want to be able to get the age in years of a person:

class Person {
  Integer getAge(Date onDate) {
    return 0; // TODO: return age
  }
}

Before we write any code, we describe the expected behavior.  To keep the example brief, I’ll just describe a couple expected behaviors:

class Person {
  /**
   * Returns person's age in years.
   * @should return null for date before birthdate
   * @should not round up age
   */
  Integer getAge(Date onDate) {
    return 0; // TODO: return age
  }
}

Next, we invoke the Behavior Test Generator plugin to automatically do the busy work of generating the skeleton for our unit tests.

class PersonTest {
  void getAge_shouldReturnNullForDateBeforeBirthdate() {
    // TODO: write unit test
  }
  void getAge_shouldNotRoundUpAge() {
    // TODO: write unit test
  }
}

So, now we can write our unit tests and see them fail, like any newborn tests in TDD would do.  Granted, in this example, you don’t technically start with the test code, but you can start by describing behavior (using @should tags) prior to writing code and use those tests to drive development.  So, yes, we start with @should tags; however, @should tags can precede any actual code, since they are effectively shorthand for the tests we are writing before coding.
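To close the loop, here is what the green phase might look like: a sketch of a getAge implementation that would satisfy both behaviors. The birthdate field, the constructor, and the test dates are assumptions added for this sketch; they aren't in the original example.

```java
import java.util.Calendar;
import java.util.Date;

class Person {
    Date birthdate; // assumed field, not shown in the original example

    Person(Date birthdate) { this.birthdate = birthdate; }

    /**
     * Returns person's age in years.
     * @should return null for date before birthdate
     * @should not round up age
     */
    Integer getAge(Date onDate) {
        if (onDate.before(birthdate)) return null;
        Calendar birth = Calendar.getInstance();
        birth.setTime(birthdate);
        Calendar on = Calendar.getInstance();
        on.setTime(onDate);
        int age = on.get(Calendar.YEAR) - birth.get(Calendar.YEAR);
        // don't round up: if this year's birthday hasn't happened yet, subtract one
        // (a DAY_OF_YEAR comparison glosses over Feb 29 edge cases, but keeps the sketch short)
        if (on.get(Calendar.DAY_OF_YEAR) < birth.get(Calendar.DAY_OF_YEAR)) age--;
        return age;
    }
}

public class PersonTest {
    static Date date(int year, int month, int day) {
        Calendar c = Calendar.getInstance();
        c.clear();
        c.set(year, month - 1, day);
        return c.getTime();
    }

    public static void main(String[] args) {
        Person p = new Person(date(1980, 6, 15));
        System.out.println(p.getAge(date(1979, 1, 1)));   // null (before birthdate)
        System.out.println(p.getAge(date(2000, 6, 14)));  // 19 (not rounded up to 20)
        System.out.println(p.getAge(date(2000, 6, 15)));  // 20
    }
}
```

Until getAge is implemented, both generated tests fail against the `return 0;` stub, which is exactly the red-then-green rhythm TDD asks for.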

@should do behavior-driven testing

May 5, 2014

In 2008, when OpenMRS was struggling to adopt better test-driven development practices, I was lucky enough to read Dan North’s Introducing BDD.  As Dan says:

It suddenly occurred to me that people’s misunderstandings about TDD almost always came back to the word “test”. –Dan North

How true!  For example, it’s common to see something like this when you start creating unit tests:

public class PatientTest {
  public void testPatient() {
    // test stuff here
  }
}

The next question is, what gets tested in a method called “testPatient”? I suppose the only wrong answer is “nothing.” But the problem is there are an infinite number of right answers… because “testPatient” doesn’t say anything about the behavior. As Dan points out, simply replacing the word “test” with the word “should” is a game changer. Let’s try again, except this time we will use “should” in our method name:

public class PatientTest {
  public void addIdentifier_shouldNotAddIdentifierThatIsInListAlready() {
    // make sure an identifier isn't duplicated
  }
}

It’s much easier to guess what will be tested inside that unit test’s method. That’s good… but it gets better. Dan’s suggestion of “should” not only places the focus on behavior, it also automagically forces testing to be scoped to a specific behavior, since any developer who sees a method name wrapping onto its third line instantly knows she is going about testing the wrong way and will look for help. Dan gives a great justification for this approach… but he had me at should.

Given Dan’s insight into using “should” instead of “test” to drive BDD, the trick was figuring out how we could engrain this approach within the OpenMRS community.  After some discussion, we came up with an idea that I’m still proud of today and I believe has helped us adopt a better testing culture.  Here’s what we did…

@should Javadoc tags

Testing is often filled with cookie-cutter code and requires additional effort that is difficult to sustain.  We wanted to find a way to overcome both of these challenges.  What we needed was a trivially easy way to generate behavior-focused tests.  So, we invented the @should Javadoc tag to allow developers to describe expected behaviors within the Javadoc and then we paid someone to develop an IDE plugin to auto-generate the test methods from existing method names.

Now that we have the @should tag, let’s take one more stab at testing.  Imagine you are writing some code for the Patient object…

public class Patient {
  public void addIdentifier(PatientIdentifier patientIdentifier) {
    // ...
  }
}

You know that an identifier shouldn’t be added twice for the same patient, so you simply state that behavior in the Javadoc:

public class Patient {
  /**
   * @should not add identifier that is in list already
   */
  public void addIdentifier(PatientIdentifier patientIdentifier) {
    // ...
  }
}

That’s it.  You’re already doing BDD!  Now, you tell your IDE to generate any missing unit tests for Patient and it automatically generates this method stub for you in the appropriate location:

public class PatientTest {
  public void addIdentifier_shouldNotAddIdentifierThatIsInListAlready() {
    // write your test here
  }
}

The IDE plugin automatically derives the proper location and method name from your @should tag and the associated method.  Now you can focus on testing that specific behavior without having to worry about any cookie-cutter code and adopting BDD is as simple as writing a Javadoc comment.
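The name derivation itself is mechanical: camel-case the @should description and append it to the method name. A hypothetical sketch of that transformation (this is an illustration, not the actual plugin's code):

```java
// Sketch of how a plugin might derive a test method name from a
// method name and a @should description. Illustrative only; not the
// actual OpenMRS behavior test generator.
public class ShouldNameDemo {

    static String testMethodName(String methodName, String shouldDescription) {
        StringBuilder sb = new StringBuilder(methodName).append("_should");
        // capitalize each word of the description and concatenate
        for (String word : shouldDescription.trim().split("\\s+")) {
            sb.append(Character.toUpperCase(word.charAt(0))).append(word.substring(1));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(testMethodName("addIdentifier",
            "not add identifier that is in list already"));
        // addIdentifier_shouldNotAddIdentifierThatIsInListAlready
    }
}
```

Because the mapping is deterministic, the plugin can also run it in reverse to detect @should tags that don't yet have a matching test method.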

Benefits of using the @should Javadoc tag

  • Expected behaviors are documented in the Javadoc itself, right where developers read about the method
  • Test method names and stubs are auto-generated, removing the cookie-cutter busy work
  • Each generated test is scoped to a single, named behavior
  • Adopting BDD becomes as simple as writing a Javadoc comment

Final Thoughts

We still have a long way to go down the road to full BDD, but I was very happy with our first step.  Over the years, the @should tag has become a handy tool for establishing a behavior-driven culture of testing; in fact, it has helped us adopt testing in general.  For any Java-shop that is wondering “How do we get our developers to start testing their code?”, I would strongly encourage you to read Dan North’s writings and consider adopting the @should Javadoc tag.

Related Resources

OpenMRS in Chinese!

April 30, 2014

[openmrs-chinese screenshot]

Very cool… of course, it’s all Chinese to me.  Here’s Google’s translation back to English:

[openmrs-chinese-english screenshot]

Thanks to Yang & team, Harsha, and all who contributed!

Freezing and Thawing Droplets in a DigitalOcean

April 21, 2014

DigitalOcean has been a game-changer for me.  Why create another space-hungry VM locally when you can spin up a new machine in 60 seconds on DO?  And it gets better: Tugboat.  Now I can manage my droplets from the command line.  Since I typically use (and reuse) droplets like I did local VMs, I often want to set a droplet aside for a while (maybe weeks or months) and return to it later.  Fortunately, DO provides a way to put snapshots into cold storage and then retrieve them later.  But freezing and thawing droplets wasn’t easy enough.  I made a suggestion to DO, but I doubt it’ll be implemented anytime soon (if ever), so I used Tugboat and some Groovy scripts to roll my own.

Here is what I was looking for:

Freezing a droplet: snapshot the droplet (replacing any snapshot of the same name), then destroy it.

Thawing a droplet: restore the droplet from its snapshot and start it back up.

The goal:

$ # Create a droplet foo
$ tugboat create foo
$ # Imagine you're done working with foo for now
$ freeze foo
$ # Foo is a snapshot & the droplet is destroyed.
$ # ... weeks pass and you have a hankering for foo ...
$ thaw foo
$ # Foo is back, Baby!

While it would be great to have freeze & thaw buttons on the DO website or freeze & thaw commands for Tugboat, I didn’t have the time to make a pull request for Tugboat… so here are the scripts:

~/bin/freeze

This script will snapshot a droplet, replacing any snapshot of the same name, and destroy the droplet.

#!/usr/bin/env groovy
 
class TugboatException extends RuntimeException {
	// Our very own little exception is born. It's a buoy!
	TugboatException(String message) { super(message) }
}
 
def getImageInfo = {
	"tugboat images".execute().text.split("\n").find{
		it.startsWith(imageName+" ")
	}
}
 
def waitFor = { imageName, to='appear' -> /* to='appear' or 'disappear' */
	attempts = 0
	while (true) {
		attempts++
		imageInfo = "tugboat images".execute().text.split("\n").find{
			it.startsWith(imageName+" ")
		}
		if ((imageInfo && to=='appear') || (!imageInfo && to=='disappear')) break
		if (attempts > 20) {
			throw new TugboatException("$imageName did not $to within 3 min. Gave up.")
		}
		sleep(10000) // wait 10 seconds between checks
	}
}
 
def cmd = { description, command ->
	print description
	response = command.execute().text
	println "done."
}
 
def cli = new CliBuilder(usage:'freeze DROPLET_NAME')
cli.q(longOpt:'quiet', '')
def options = cli.parse(args)
 
if (!options.arguments() || options.arguments().size != 1) {
	cli.usage()
	System.exit(0)
}
 
imageName = options.arguments()[0]
 
if (!("tugboat droplets".execute().text ==~ /(?ims).*^$imageName\s.*/)) {
	println "$imageName droplet does not exist"
	System.exit(1)
}
 
imageInfo = getImageInfo()
if (imageInfo) {
	imageId = (imageInfo =~ /id:\s*(\d+)/)[0][1]
	if (imageId) {
		cmd("Destroying old $imageName image...", "tugboat destroy-image -c -i $imageId")
		waitFor(imageName, 'disappear')
	}
}
 
cmd('Telling droplet to halt...', "tugboat halt $imageName")
 
cmd('Waiting for droplet to shut down...', "tugboat wait $imageName -s off")
 
sleep(3000)
 
cmd('Taking snapshot of droplet...', "tugboat snapshot $imageName $imageName")
 
print "Waiting for image to complete..."
waitFor(imageName)
println "done."
 
cmd("Destroying $imageName droplet...", "tugboat destroy -c $imageName")

~/bin/thaw

This script will restore a frozen droplet and start it up for you.

#!/usr/bin/env groovy
 
def getImageInfo = {
	"tugboat images".execute().text.split("\n").find{
		it.startsWith(imageName+" ")
	}
}
 
def cmd = { description, command ->
	print description
	response = command.execute().text
	println "done."
}
 
def cli = new CliBuilder(usage:'thaw IMAGE_NAME')
cli.q(longOpt:'quiet', '')
def options = cli.parse(args)
 
if (!options.arguments() || options.arguments().size != 1) {
	cli.usage()
	System.exit(0)
}
 
imageName = options.arguments()[0]
 
if ("tugboat droplets".execute().text ==~ /(?ims).*^$imageName\s.*/) {
	println "$imageName droplet already exists"
	System.exit(1)
}
 
imageInfo = getImageInfo()
if (!imageInfo) {
	println "Image $imageName not found"
	System.exit(2)
}
 
imageId = (imageInfo =~ /id:\s*(\d+)/)[0][1]
if (!imageId) {
	println "Unable to parse $imageName image id"
	System.exit(3)
}
 
print "Thawing image..."
response = "tugboat create $imageName -i $imageId".execute().text
println "done."
 
print "Waiting for droplet to start..."
response = "tugboat wait $imageName".execute().text
println "done."

Guns make a difference

April 4, 2014

[Guns make a difference image]

There are now two definitions for insanity:

in·san·i·ty [in-san-i-tee]
  1. Doing the same thing over and over again and expecting different results.
  2. Believing the solution to gun violence is more guns.