Punched Code Tape

It Was All Over After Punch Cards

Written By: Lawrence Waugh, Founder at Calavista

I don't feel old, but when I look around at other people in the IT industry, I realize I'm a dinosaur.

How old a dinosaur? My first programming job (the summer after my sophomore year at MIT) was writing COBOL code to cull trends from Polaroid's customer registration "database" - really just one big flat file. I have to shake my head at the sheer number of anachronisms in that one sentence.

The first computer I programmed, as a sophomore in High School, had 4K of RAM, and you loaded programs via punched tape. Half of that 4K was consumed by the BASIC interpreter, leaving you 2K to work with on your program. There was no swap space - there was no hard drive to swap with. So the 2K was what you had. You could be typing in a program (on a console that printed directly to a paper roll, not a CRT), and you'd actually get an OUT OF MEMORY response when your program got too big. Programs weren't stored in some representational form - they were stored as literal strings of characters and interpreted later - so code brevity really mattered. I remember shortening the output strings of a program to save a hundred bytes or so, so that the entire program could fit in memory. That sounds like a Dilbert cartoon, but it's true. Concise code became an art form for me.

But then I graduated to some serious computing power. The town's computer was housed in my school, so in Comp Sci 2, we got to use it, and its awesome 8K of RAM. Of course, 2K was reserved for the FORTRAN interpreter, but that still left essentially 3x the heap and stack space to write code. It also had a Winchester Fixed Drive, but that was some magical thing that we never got to play with. We had to submit our FORTRAN programs on punch cards, laboriously typed out on a punch card machine.

I learned a lot from that, believe it or not.

We would have one assignment per week. The "jobs" we submitted - a deck of cards wrapped in a rubber band, placed in a cardboard box in the server room - would be executed at night, after the town's work was done. The jobs included cards for the data, so that the computer would read the program, then read the data, and then do its thing. Mr. Hafner, a tall, bespectacled man, would come back after dinner, load a deck, and press the "execute" button. The computer would suck in the cards, then print out the results. Whatever they were. He'd then take that deck, wrap it in the output (that venerable fan-fold, wide, striped paper), and put it back in the box. We'd come in the next morning and rummage through the box for our output.

So that's exactly five attempts to get your program perfectly correct, assuming you tried to run it the same day it was assigned. More likely, you'd spend a few days writing the program before trying to execute it. So 2, or maybe 3, tries at best.

Now imagine coming in the day before a project is due, picking up your output, and seeing:

Syntax Error on line 18

D'oh! A day's effort lost, with no indication of whether or not - once the stray comma on line 18 was corrected - your program would even compile, let alone run correctly. It all depended on the next night's run. Last chance.

It didn't take many F's on assignments before you got very, very careful about your coding. Syntax errors were one thing - but algorithmic errors were another. The teacher, Mrs. Sheldon, had a set of data she'd feed your program. Running the program once or twice on trivial data wouldn't catch most of the errors. So you sat down and flowcharted things out. You thought up edge cases. You compared algorithms with your friends'. You shot holes in their ideas, and defended your own. You read, and re-read, your punched cards. You'd swap decks and read each other's work, in case your eyes might catch something your friends missed.

In short, because we only had a few tries to get it perfect, the cost of a mistake - whether design, implementation, or syntax - was grave. And because the cost was so grave, we killed ourselves to make sure we didn't make mistakes. As a result, we did the kind of design, peer review, and QA work that most development shops today would be proud of. We were barely teenagers, writing complex code, working on antiquated equipment, writing everything from scratch. But our code almost always compiled, and ran correctly, the first time.

When I was a senior, the high school got a CMS system, with some CRT terminals. We also got an Apple IIe. Now I could type my code in and run it, on the spot. I never touched a punch card again.

And my coding immediately got sloppier. I started typing before I'd finished thinking. I started trying to compile before I'd finished coding. But worst, I started coding before I'd really designed. Sometimes things just wouldn't work, and I'd have to start over. But the more insidious errors crept in when my code would almost work correctly. It would work in the obvious way, but I wouldn't have spent the time to think through the edge cases, or complex numbers as input, or just the unexpected.

Over the years, I've tried to discipline myself to not work that way. I've used PSP, enforced estimates, allocated specific time to design and peer review... but at the end of the day, my keyboard beckons. And it's hard to not want to start typing when you're excited about a project. "Why not just try this, and see if it works?"

And when I do that, I invariably write inferior code.

At Calavista, we have a process where the developer doesn't commit code directly. Instead, they create a regression test that demonstrates that their code works, and then tells our system that they're ready to "close" the issue. The system might check to see if another developer has diff'ed their changes in the past 24 hours. If so, it will take their new code, merge it with any recent changes to the code base, check that code into a temporary sandbox, build the product from source, create an installable, install it, and smoke-check the result.

Then, it will run the developer's new regression tests to ensure that what they set out to do actually works. Finally, it will run every regression test ever written as part of previous closes, to make sure all of those still work. Depending on the project, it may also run performance tests, or code coverage analysis, or any one of several other tests.

Only then, when all tests have passed - when everything has worked flawlessly - does the code get promoted into the main line (or root, or master branch, or whatever), and made available for manual testing. Which is a whole different story.
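The gated "close" flow described above can be sketched in a few lines. This is a hedged illustration of the general pattern, not Calavista's actual system; the stage names and the shape of the `change` record are hypothetical.

```python
# A sketch of a gated commit pipeline: a change is promoted to the main line
# only after every stage passes, and is rejected at the first failure.

def close_issue(change, pipeline):
    """Run each gate in order; stop at the first failure.

    `pipeline` is an ordered list of (name, check) pairs, where each check
    is a callable taking the change record and returning True or False.
    """
    for name, check in pipeline:
        if not check(change):
            return f"rejected at: {name}"  # change stays out of the main line
    return "promoted to main line"

def make_pipeline(all_regression_tests):
    # Hypothetical stages mirroring the description in the text.
    return [
        ("merge with recent changes",  lambda c: c["merges_cleanly"]),
        ("build from source",          lambda c: c["builds"]),
        ("install and smoke-check",    lambda c: c["smoke_ok"]),
        ("new regression tests",       lambda c: all(c["new_tests"])),
        ("all prior regression tests", lambda c: all(all_regression_tests)),
    ]
```

The point of the ordering is that cheap checks (merge, build, smoke) run before the full regression suite, so most bad changes are rejected quickly.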

This process has worked incredibly well for making sure the code base is stable, and functional. Basically, the code is examined so carefully before it's checked in that every commit is a release candidate.

But when I think about it, that's pretty much what we were doing back when we had to work with punch cards. The cost of a mistake was grave - so we didn't make them. And maybe that's what we've lost in the intervening years as we've sped up and "improved" the development cycle.

So here's to punch cards. You taught me a lot. Rest in Peace.

Web App

Web App Jump Start Comparison: Generated UI

Written By: Steve Zagieboylo, Senior Architect at Calavista

This is the third in the author's blog series comparing two "jump start" tools for making Java-based Web Applications:

Both of these platforms create for you an application with a ton of functionality. There is tremendous value in starting with a completely working application, so you can get right to work on your own code instead of spending time struggling to get the boilerplate working. But both of these give you much more than that.

JHipster, Cuba Framework Feature Chart

JHipster Landing Page with Admin Menu

Java- Admin

Cuba Platform Admin Menu

Cuba- Admin Menu

The default application from Cuba does not have a landing page, but jumps right into the first of the Admin pages (for the Admin user).

JHipster Application Metrics

JHipster Metrics

Below are the sections that were on the page. Some of these, like Ehcache Statistics, are available only if that element was selected when the original application was generated. There were more options that I did not select, and I suspect that they would show up here as well.

  • JVM Metrics
    • Memory
    • System
    • Garbage Collection 
  • HTTP Requests
  • Ehcache Statistics
  • Datasource Statistics

Cuba Platform Application Metrics

This has a similar set of views.

Cuba Metrics


JHipster Configuration View

This is a really helpful view if you have a number of different deployments with different configurations. Rather than having to go check the configuration settings for any particular instance, this view of the data is right there in the admin menu. In addition to the Spring Configuration, there are all the System Properties, the Environment Properties, and the Application Configuration - pretty much everything that you use to manage the features of your system. The only downside to this view is that the values are not editable here, but that would be too much to ask.

Cuba Configuration

Cuba Platform Dynamic Attribute Management

From their documentation: Dynamic attributes are additional entity attributes, that can be added without changing the database schema and restarting the application. Dynamic attributes are usually used to define new entity properties at deployment or production stage.
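The idea behind dynamic attributes can be sketched with a small side store: extra per-entity values live outside the entity's own table, so adding a new attribute needs no schema change and no restart. This is an illustration of the general technique, not Cuba's actual implementation.

```python
# A minimal sketch of dynamic attributes: values keyed by entity, kept in a
# generic store rather than as columns on the entity's table.

class DynamicAttributes:
    def __init__(self):
        # (entity_type, entity_id) -> {attribute_name: value}
        self._store = {}

    def define(self, entity_type, entity_id, name, value):
        """Attach a new attribute at runtime; no schema change required."""
        self._store.setdefault((entity_type, entity_id), {})[name] = value

    def get(self, entity_type, entity_id, name, default=None):
        return self._store.get((entity_type, entity_id), {}).get(name, default)
```

In a real system the store would be a database table keyed the same way, which is why such attributes can be added at the deployment or production stage.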

Cuba Attribute


Cuba Platform Scheduled Tasks Management

This keys off of the schedule annotations in Java, and it gives you live information and control over these tasks. Given how challenging it is to debug issues with these tasks, just having a little more control over them seems like a great thing.

Cuba Scheduled Tasks

JHipster Live Log Management

This is my favorite feature of JHipster. It automatically detects all the loggers that you have created, and it lets you change their log level on the fly.

JHipster Live Log Management

Web App

Web App Jump Start Comparison: Setup and Start

Written By: Steve Zagieboylo, Senior Architect at Calavista

This is the second in the author's blog series comparing two "jump start" tools for making Java-based Web Applications:

To the amusement of my colleagues at Calavista, I am constantly saying how much I hate computers. I don't, of course, but what I hate is how hard it is to do anything you haven't done before. Lots of tools have an overly-complicated setup process, and there's no reason for it other than the creator of the tool not paying enough attention to the new user getting started. I've abandoned more than one tool because, a couple hours in, I still couldn't get it working. I always figure that if their setup is so filled with bugs, then the product probably is, too, so I don't feel any loss in giving up so easily.

Caveat: I have already used JHipster for several projects for different Calavista customers. So my setup and start was not exactly virgin, but I am presenting it here as if it were. In fact, I had more trouble with this because I had an older version already on my old system, and I failed to uninstall it properly in my first try.


Overall Winner! It's a tie.

It is hard to really compare them. Cuba was simpler to get started with, and it serves a very different purpose. It is intended for ongoing development rather than just getting started, but it offers fewer options for the final architecture. What it offers is great, if that's pretty much what you want. JHipster, on the other hand, is primarily a one-time tool with which you create your application, and then you're on your own. It is incredibly powerful in what it creates, but all that power makes it a lot harder to get started, simply because there are so many choices you have to make.


Step One: Install-- Winner! Cuba Framework

Cuba Framework: If you already use IntelliJ, then the Cuba install couldn't be easier-- just point at a plug-in and go. Their older product had a separate IDE, which you used for everything except the Java editing, but now it is all integrated into the one plug-in. You create a new project with File/ New/ Project, as you would expect, answer a few questions, and you're ready to run. If you don't already have IntelliJ, then they have another option which I believe includes a free version of a scaled-down IntelliJ, but I haven't tried it.

JHipster: This is also quite easy, but not quite as simple as just adding a plug-in. They have a very clear 'Getting started' page with instructions to install Node.js and npm, and then to use them to install JHipster. This is where I ran into trouble, because I had forgotten that I had installed the old one with yarn, and my system was finding that one rather than the newer version, but that's on me. Once I got it straightened out, it all worked fine.


Step Two: Create and Run your Application -- Winner for ease of use, Cuba Framework. Winner for power, JHipster.

The next step was to create and to run the generated application. In both cases, I had it use the PostgreSQL database that I had already running. JHipster also has an option to use an embedded H2 database for development, which would have gotten around the next hurdle, but I had it using PostgreSQL for both development and deployment.

A not very artificial hurdle: Just to see how well they handled the misstep, I did not create the database user or schema that it was set up to use. (I remembered that I had made this mistake when I first tried out JHipster, a few years ago.)

Cuba Framework: This could not have been easier. The menu has acquired a new top level choice 'CUBA' (which I confirmed is only shown in an actual Cuba project, not in any of my other projects). On the menu is 'Start Application Server' which I selected. When it couldn't log in to the database, it told me clearly that the database wasn't available. Once I fixed that problem, it ran perfectly, giving me a link in the Console window to launch my browser pointing at the UI.

JHipster: This had a few hiccups, some of which are related to the additional power that is available. First, rather than just a new project in my IDE, it has a command line interface that walks me through a dozen choices (many of the same choices in Cuba's New Project dialog, such as root package name, database provider, etc.). There was a dizzying array of selections, but most had defaults that I know are good choices. Many of these are options for which Cuba simply made the one choice for you.

Web App Comparison
Spoiler Alert: Cuba wins for power in other arenas, specifically the implementation of the data model. If the UI Framework is not a deal breaker, the greatly expanded set of choices for Model implementations might cause the power-hungry to swing back. See future blogs in this series.
Once I had gotten through the application generation, npm reported some vulnerabilities, some of which were not trivially fixed. This is a little concerning.

After generation is complete, it finishes with instructions on how to launch the server. I was able to point IntelliJ at the pom file and launch the Spring Boot application from there, but it was less obvious how to do it. (Since I already knew how, I'm not sure how much less obvious it was.)

JHipster did not do well, however, on the missing database test. It seemed to be running and I was able to bring up the UI, but then it gave a somewhat confusing error message, saying that authentication failed. It was actually referring to the server's authentication with the database, but that wasn't clear. At first I thought that I had just forgotten the admin password.

Application Comparison

Both tools create an impressive application, with a ton of functionality already working (such as User and Role management, Swagger UI, CRUD UI of all the data, and lots more). However, that is the subject of the next blog in this series.


Using Alba to Test ASP.Net Services

Written By: Jeremy Miller, Senior Architect at Calavista

One of our projects at Calavista right now is helping a client modernize and optimize a large .Net application, with the end goal of everything running on .Net 5 and an order-of-magnitude improvement in system throughput. As part of the effort to upgrade the web services, I took on a task to replace this system's usage of IdentityServer3 with IdentityServer4, but still use the existing Marten-backed data storage for user membership information.

Great, but there's just one problem. I've never used IdentityServer4 before and it changed somewhat between the IdentityServer3 code I was trying to reverse engineer and its current model. I ended up getting through that work just fine. A key element of doing that was using the Alba library to create a test harness so I could iterate through configuration changes quickly by rerunning tests on the new IdentityServer4 project. It didn't start out this way, but Alba is essentially a wrapper around the ASP.Net TestServer and just acts as a utility to make it easier to write automated tests around the HTTP services in your web service projects.

I started two new .Net projects:

1. A new web service that hosts IdentityServer4 and is configured to use user membership information from our client's existing Marten/Postgresql database.

2. A new xUnit.Net project to hold integration tests against the new IdentityServer4 web service.

Let's dive right into how I set up Alba and xUnit.Net as an automated test harness for our new IdentityServer4 service. If you start a new ASP.Net project with one of the built-in project templates, you'll get a Program file that's the main entry point for the application and a Startup class that has most of the system's bootstrapping configuration. The templates will generate this method that's used to configure the IHostBuilder for the application:

For more information on the role of the IHostBuilder within your application, see .NET Generic Host in ASP.NET Core.

That's important, because it gives us the ability to stand up the application exactly as it's configured in an automated test harness. Switching to the new xUnit.Net test project, I referenced my new web service project that will host IdentityServer4. Because spinning up your ASP.Net system can be relatively expensive, I only want to do that once and share the IHost between tests. That's a perfect usage for xUnit.Net's shared context support.

First, I make what will be the shared test fixture context class for the integration tests shown below:

The Alba SystemUnderTest wrapper is responsible for building the actual IHost object for your system, and does so using the in-memory TestServer in place of Kestrel.

Just as a convenience, I like to create a base class for integration tests that I tend to call IntegrationContext.

We're using Lamar as the underlying IoC container in this application, and I wanted to use Lamar-specific IoC diagnostics in the tests, so I expose the main Lamar container off the base class as just a convenience.

To finally turn to the tests, the very first thing to try with IdentityServer4 was just to hit the descriptive discovery endpoint just to see if the application was bootstrapping correctly and IdentityServer4 was functional at all. I started a new test class with this declaration:

[code screenshot: the test class declaration]

And then a new test just to exercise the discovery endpoint:

[code screenshot: the discovery endpoint test]

The test above is pretty crude. All it does is try to hit the /.well-known/openid-configuration url in the application and see that it returns a 200 OK HTTP status code.

I tend to run tests while I'm coding by using keyboard shortcuts. Most IDEs support some kind of "re-run the last test" keyboard shortcut. Using that, my preferred workflow is to run the test once, then assuming that the test is failing the first time, work in a tight cycle of making changes and constantly re-running the test(s). This turned out to be invaluable as it took me a couple iterations of code changes to correctly re-create the old IdentityServer3 configuration into the new IdentityServer4 configuration.

Moving on to doing a simple authentication, I wrote a test like this one to exercise the system with known credentials:

Now, this test took me several iterations to work through until I found exactly the right way to configure IdentityServer4 and adjusted our custom Marten-backed identity store (IResourceOwnerPasswordValidator and IProfileService in the IdentityServer4 world) until the tests passed. I found it extremely valuable to be able to debug right into the failing tests as I worked, and even needed to take advantage of JetBrains Rider's capability to debug through external code to understand how IdentityServer4 itself worked. I'm sure that I was able to get through this work much faster by iterating through tests as opposed to just trying to run the application and driving it through something like Postman or through the connected user interface.


Hire The Professional

Written By: Lawrence Waugh, Founder at Calavista

"If you think it's expensive to hire a professional...
...wait until you hire an amateur."
- Red Adair *

I recently changed the radiator in my Jeep.

I did it myself, in part to save money, in part because it was a project my son and I could do together, and in part just because I wanted to. There's satisfaction in doing a task yourself, even if - sometimes especially if - it's not the kind of task you normally do.

And with a 20-year-old car that I paid $500 for- what's the worst that can happen?

On the other hand, there are tasks that I defer to the professionals on. Brain surgery, for one. Another is accounting. My business partner and I are smart guys- we could certainly figure out how to file our own corporate taxes if we wanted- but why? The cost of doing it wrong is huge. If you overpay, you're out thousands of dollars. If you underpay, you could go to jail. All in all, the cost of doing it right is not much compared to the cost of doing it wrong. So I use a professional.

Sometimes people choose amateurs to do a job that calls for a professional. I see this in software development all the time.

At Calavista, we recently spoke with a prospect (we'll call him "Mycroft") who had contacted us after his (outsourced) development team failed to make its 3rd consecutive delivery deadline. His team had been working for 9 months on an application, and though they'd done "great work" so far, they had not been able to actually release the project.

When we asked what "great work" meant, he said that the product demoed cleanly, looked good, and clearly worked- it just wasn't complete. There were a few minor bugs to work out, and he couldn't understand why it was taking so long.

Personally, my experience is that when a team cannot put the nail in the coffin and finish a project, it's usually because they've accumulated too much technical debt. That is, they might find the quick, "demo-able" solution to some problem is X, while the real, "releasable" solution would be Y. Y is more complex and time-consuming than X, so they do X. Often the reasons for this are valid - e.g. the customer needs to see the functionality working so they can make decisions on other things - and sometimes the team is just lazy. Or ignorant. But regardless, the choice to do the simple thing is made again and again, and ultimately, when the time comes to actually deliver, all those choices have to be addressed. In some cases, there may be so many things to resolve that it's an effective re-architecture of the product, and "finishing things off" would really mean re-writing much of the code.

When we warned Mycroft of this, he pooh-poohed the idea ("I've seen it work!"), and indicated that the code was solid, and he didn't think it would take long (or cost much) for us to clean things up and ship his code. He made sure we knew that he was very experienced, having run many development groups, and having brought lots of products to market. He knew what things should cost in this industry, and (in so many words) put us on notice that he was nobody's fool.

During the course of this conversation, it became apparent that Mycroft had a team of 8 people, which he'd cobbled together, clearly with price as the driving factor. Now the warning bells were really going off. But we did agree to look at the code, so that we could give him an estimate.

The code was frankly shocking. Shortcuts were taken everywhere, crippling the application. Account passwords were stored in plain text, credit card CCID numbers were actually stored in the DB, and worst of all, there were huge vulnerabilities to SQL injection attack. Taken together, this meant that a savvy attacker could easily spoof the application into revealing all of the customer names, account info (including credit card numbers and CCIDs), and passwords. These problems- signs of quick and easy implementations in order to get functionality to demo- were systemic and ubiquitous. This was not an enterprise application. Security, scalability, performance...everything had been sacrificed to make a demo work.
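The flaws described here have standard, well-known fixes. As a hedged illustration (not the code in question, which was .Net): parameterized queries close the SQL-injection hole, and only a salted hash of a password is ever stored. This is a sketch only; a real system would use a vetted library such as bcrypt, and would not store CCID/CVV values at all.

```python
# Two of the basics the "amateur" code skipped: parameterized SQL and
# salted password hashing, sketched with the Python standard library.
import hashlib
import os
import sqlite3

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest); store these, never the plain-text password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def find_user(conn, username):
    # The "?" placeholder keeps attacker-controlled input out of the SQL
    # text, unlike string concatenation ("... WHERE name = '" + username + "'"),
    # which is what makes injection like ' OR '1'='1 possible.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchone()
```

With the placeholder form, a classic injection string is treated as a literal (and matches nothing) instead of rewriting the query.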

The analogy we used was to a Hollywood set. His developers had built a false town. The bank's facade was complete, but inside, there was no safe- the money was lying around in piles. Same with the hotel, the saloon, the blacksmith's shop... You could walk an investor through the town on a guided tour and it would look good, but if you opened the doors to the public, it was all over.

The upshot is that Mycroft's company will need to start over completely.

In this case, choosing the cheapest option was a spectacularly bad decision. 6-figures of investment (even given the low hourly rate), down the drain. More importantly, 9 months of lost market opportunity.

I'm not actually a believer in "you get what you pay for" - I've seen large software firms charge exorbitant prices for middling work. But I do believe in hiring the professional. In this case, Mycroft's team clearly had no experience in producing applications that were enterprise-class, that followed institutional guidelines on credit card security, or that simply observed commonly-accepted (and necessary) standardized coding practices. They wrote a piece of software that could impress an individual in a demo, but which could never actually be used.

There are all sorts of lessons learned here. "If it looks too good to be true...," "always interview your developers," "always perform code reviews," "build specific performance/security/scalability requirements in from the start," etc. But Red summed it up. Hiring an amateur can be the most expensive mistake you'll ever make.

*- Red Adair (en.wikipedia.org/wiki/Red_Adair) was a legendary oil well firefighter, whose company was hired to extinguish some of the most catastrophic oil well blazes in history: from Texas, to the North Sea, to Kuwait. He charged top dollar for his services, but his customers knew that no one would do the job better or faster.

Web App

Web App Jump Start Comparison

Written By: Steve Zagieboylo, Senior Architect at Calavista

One thing Calavista does very well as a company is building Web Applications from scratch. If you have a great idea for an application and you plan to bet your life savings on building a company around it, a smart move would be to find a development organization with an on-time, within-budget record of greater than 90%, such as Calavista. One way that we achieve this record is by using some great tools to jump start the application.

Quick Comparison

JHipster is more of a one-and-done quickstart -- at least, that’s the way I’ve always used it.  Once we had the initial project with the data model basically defined, we did not use the tool any more.  JHipster does support ongoing use, I understand, but it feels more awkward.  Cuba Platform, on the other hand, is completely designed to be used for the lifetime of the project.  There is an exit strategy if you find you are not happy with it, but clearly the intent is that you continue to use it.

Best Practices for a Java Web Application

There are a number of elements that are common to many applications -- web framework, security, build environment, etc.  It doesn’t make any sense to develop them from scratch each time.  Of course, every new project comes with its own different challenges, so we do not want some “one size fits all” framework.  What we need is a great starting point, but one that does not limit where we will eventually end up.  It should provide us exactly what we would have created if we had the time and skill to build it properly from the ground up, but do so in minutes rather than weeks.  All the pieces should use Industry Best Practices, which, for this purpose, I’ll define as follows.

  • Spring Boot Web Framework
  • Clean separation of layers: Data Model, API, Business logic
  • Authentication using Spring Security
  • Database access using JPA
  • REST services defined with JAX-RS annotations
  • Maven or Gradle build system, with proper project structure
  • Unit tests with at least 80% code coverage
  • Use a “best of breed” UI framework

Complete, Functioning Application

The big requirement for our Jump Start is that it should create a complete application, with some easy way to create our data model without having to construct all the pieces by hand.  There should be some meta level at which I describe the Data Model, and the Jump Start tool builds the pieces for me, from the UI to the DTOs to the database entities. 
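That "meta level" idea can be sketched in a toy form. This is an illustration of the concept, not how JHipster or Cuba actually work; the entity names are hypothetical, and a real tool generates far more (UI, DTOs, database entities) from the same single description.

```python
# A toy model-driven generator: describe the data model once, and generate
# the boilerplate classes from that description.
from dataclasses import make_dataclass

MODEL = {
    "BlogPost": [("title", str), ("body", str)],   # hypothetical entities
    "Author":   [("name", str), ("email", str)],
}

def generate_entities(model):
    # One description drives every generated piece; here we emit only a
    # DTO-like class per entity, where a real jump-start tool would also
    # emit CRUD UI, REST endpoints, and persistence mappings.
    return {name: make_dataclass(name, fields) for name, fields in model.items()}
```

The payoff is exactly the one described above: change the model description, regenerate, and every derived piece stays consistent.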

I can’t overstate the importance of a complete, functioning application. Consider every project where you spent the first few days, at least, just getting a basic “Hello, World” working. Of course, for a “Hello, World” in a web application, you need at least user identity, database connection, login, basic user management, a UI framework, and a REST framework. Building those from scratch is going to take a few days, at least. Building them with one of these quick starts takes under an hour, and you get a lot more functionality besides.

There are a few tools which purport to give such a quick start to your application.  This set of blogs is going to focus on two of them which are very different in approach but both accomplish the goal admirably.  They are:

These other tools were also considered, but discarded for different reasons:

  • Django -- https://www.djangoproject.com/  This also looks promising, but it creates a Python-based application, rather than Java.  While this also is a viable alternative for web applications, we rather arbitrarily decided to limit our choices to Java for now.
  • Several no-code application builders:  bubble.io, kintone.com, journeyapps.com, all discarded because we are planning for an application that is something more than just a glorified CRUD application over a database.  We want the ability to get to the core code, once the basic application has been generated for us, to write custom business logic and to create custom UI.

We’ll consider these two platforms on these criteria:


  • Installation and setup
  • Defining the Data Model
  • Creating and Running the Application
  • Built-in Functionality
  • Analysis of the Generated UI for the Entities
  • Analysis of the Generated Code
  • Analysis of the Generated Unit Tests


I have a lot of experience with JHipster, having used it successfully for three different projects for Calavista customers.  So I plan to spend more time with the Cuba Platform, because I am learning it for this exercise.  (That’s actually the reason for the exercise.)  But I promise to try to look at JHipster as if I were approaching that with equally unfamiliar eyes.

Continuous Deployment

Yes, Virginia, Continuous Deployment Does Have Controls and Approvals: Part 3

Written By: Daniel Kulvicki, Solutions Director at Calavista

In my last two blogs, I went over the specifics of Continuous Deployment and gave some examples of how you can enforce quality and controls even though you are releasing at breakneck speeds.  To finish off the series, we will dive into Dark Launches and Feature Toggling.  These mechanisms allow our teams to deploy software as fast as possible while reducing the risk of bugs or issues for our broader customer base.

Dark Launches

Let us start with the official definition of what it is to provide Dark Launches of software features. 

Dark launching is the process of releasing production-ready features to a subset of your users prior to a full release. This enables you to decouple deployment from release, get real user feedback, test for bugs, and assess infrastructure performance.

Although that is the official definition of Dark Launches, I have my own interpretation.  To release as quickly as possible, you can also Dark Launch without having any users test the new feature(s).  Some companies prefer this, as it allows for faster releases and reduces the risk of issues.  What you are essentially shipping is a “turned off” feature (or features) in production.  There is some risk, since you need to ensure that even when a feature is turned off, it does not impact the turned-on features in production.  But it means you are still deploying code at extremely fast speeds while minimizing risk, because you can turn the feature on later and test it with whatever size audience you need!
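To make the “subset of your users” idea concrete, here is a minimal sketch (illustrative names, not any particular vendor’s implementation) of a percentage-based dark launch: hash each user id into a stable bucket from 0 to 99, and only users whose bucket falls below the rollout percentage see the feature.  Because the bucket is derived from the id, each user gets a consistent answer across requests.

```java
public class DarkLaunch {

    // Returns true if this user falls inside the rollout percentage (0-100).
    public static boolean inRollout(String userId, int percent) {
        // floorMod keeps the bucket in 0..99 even when hashCode() is negative.
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < percent;
    }

    public static void main(String[] args) {
        // At 0% nobody sees the feature; at 100% everybody does.
        System.out.println(inRollout("alice@example.com", 0));    // false
        System.out.println(inRollout("alice@example.com", 100));  // true
    }
}
```

Raising the percentage over time turns a dark launch into a gradual rollout, with no redeployment between steps.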

Feature Toggle

Feature Toggle is one of the most popular architectures used for Continuous Deployment.  Originally, the main use of feature toggles was to avoid the conflicts that can arise when merging changes at the last moment before a release.  What quickly emerged, however, was a structure for introducing new code rapidly without causing breaking changes within an application.  Some organizations that adopted Feature Toggles soon realized that they were on the verge of Continuous Deployment.  As their release cadence became sound and fast, these companies were able to change their software product lifecycle to take advantage of this newfound speed.

Feature Toggles do have some downsides that need to be addressed up front for an organization to adopt this style of architecture.  If you are not careful, technical debt can accumulate, since toggles left permanently on leave behind stagnant code.  However, if you manage this correctly, the speed you gain will outweigh the cost of managing the debt.
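At its core, a feature toggle is just a named flag checked at runtime.  The sketch below (hypothetical names, not a specific toggle library) shows the key property that enables Continuous Deployment: unknown flags default to “off,” so dark-launched code ships dormant and can be flipped on later without a redeploy.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FeatureToggles {
    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    public void set(String feature, boolean enabled) {
        flags.put(feature, enabled);
    }

    public boolean isEnabled(String feature) {
        // Unknown features default to "off": new code stays dormant.
        return flags.getOrDefault(feature, false);
    }

    public static void main(String[] args) {
        FeatureToggles toggles = new FeatureToggles();

        // Deployed dark: the code path ships, but the toggle is off.
        String checkout = toggles.isEnabled("new-checkout") ? "new" : "legacy";
        System.out.println(checkout);  // legacy

        // Later, flip it on for everyone -- no redeployment required.
        toggles.set("new-checkout", true);
        System.out.println(toggles.isEnabled("new-checkout"));  // true
    }
}
```

In production the flag values would typically come from a config service or database rather than an in-memory map, which is also what makes the technical-debt cleanup (deleting retired toggles and their dead branches) an explicit chore to schedule.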


Well, hopefully I have shown you how one implementation of Continuous Deployment can help your organization be even more agile than it currently is, while keeping the same level of quality your customers deserve.  I do understand that Continuous Deployment is not a great fit for every organization, and sometimes the overhead it brings can be a bit too much for smaller groups.  Most organizations do not even evaluate whether it is a possibility.  Hopefully, this series will help you and your team determine how much more Agile you can become!


Software and Sourdough

Written By: Steve Zagieboylo, Senior Architect at Calavista

My non-computer hobby since quarantine started has been making sourdough bread. I created my own starter and I’ve been making bread almost every weekend since last March. I’ve gotten pretty good at it, such that commercial bread now is unacceptable to anyone in my family. (In other words, I’m not allowed to stop making it every single weekend.)

In a Calavista company meeting last week I joked that I was planning a blog on making sourdough bread. The company president then challenged me to write a blog comparing software development to bread making. So here goes…

Software and Sourdough: Starter
Software and Sourdough: Team
Software and Sourdough: Plan
Software and Sourdough: Tools

I’d like to point out just how successful our processes are, both my bread making and our software development.  At 90+% on time and within budget, Calavista exceeds the industry norm by a ridiculous margin.  My success with bread is only around 80%; I’ve had a few bricks, but I’ve learned from each one.  (For instance, using whole-grain flour means you need more water in the dough, and more time fermenting.)  Comparing the two processes was a bit of a stretch, but I hope it was entertaining and gave a little insight into the software development process we follow at Calavista.

Marten DB With Jeremy Miller

Written & Presented By: Jeremy Miller, Senior Architect at Calavista

The 6 Figure Developer is a website designed to help others reach their potential. They're focused on providing developers with tools, tips, and techniques to increase their technical skills, communication skills, and income.


In this podcast, our Senior Architect, Jeremy Miller, is interviewed by The 6 Figure Developer podcast hosts John Callaway, Clayton Hunt and Jon Ash. Jeremy has a technical conversation about Marten DB, a .NET Transactional Document DB and Event Store on PostgreSQL. Tune in to find out more!

Irritant Removal System

What Can The IRS Do For You?

Written By: Andrew Fruhling, Chief Operating Officer at Calavista

As you start the new year and set your goals, I suggest looking into how the IRS can actually help you. I’ve seen many cases where it has improved the user experience for key customers, built product capability beyond your team’s capacity, and accomplished so much more. As we are at the beginning of “tax season” in the United States, I should probably be a little more explicit. I’m not talking about the Internal Revenue Service.  I don’t know anyone who has benefited from that IRS. One of our customers recently called us their “Irritant Removal System” (IRS) and, after a brief chuckle, we saw how this concept could be very beneficial to many companies.


The Good, The Bad, and The Ugly

During my years of running product management teams at various software companies, I had my teams regularly conduct face-to-face meetings with our top customers. The purpose of the meetings was to talk through “The Good”, “The Bad”, and “The Ugly”. In other words, we wanted to hear directly from our top customers about what was working well with our product(s), what was not working so well, and what was driving the users crazy.   

I specifically organized the discussions in this order. Hearing about what was working well (“The Good”), gave the product managers anecdotes we then shared with other customers and prospects. We were also able to share “The Good” with the development teams, who too often only heard the negative and rarely heard about what was working well.   

Talking through what was not working well (“The Bad”) provided a good review of the customer’s top issues with the software.  Almost always, the customer had already created trouble tickets for these items. Further discussion during these meetings simply allowed us to get a better understanding of the issues, including insight into the real “Why” behind them. This gave us a better view of the priorities for the top issues.


The Golden Opportunity

For me, the real “Golden” opportunity was actually hearing about what was driving the end users crazy (“The Ugly”). Initially, customers didn’t understand the difference between “The Bad” and “The Ugly” in this context. The items that drive the end users crazy were irritants but not really bugs in the software. Yes, they caused issues with the user’s experience, but they were not deemed worthy of a trouble ticket. Many times, these items were relatively easy to address – perhaps something as simple as: “When I am doing this function on this screen, I need a couple of additional pieces of information from another screen.” Sometimes they were more complex.  Either way, these items negatively impacted the experience of the end users and needed to be addressed.   

Although these customer discussions provided a lot of good information, they often did not lead to the results we wanted. Once you have a list of irritants, what do you do with it? The development capacity is usually fully committed to new features (roadmap) and bug fixes (maintenance). Anyone who has actually run software development teams will probably tell you that these teams are always overcommitted on other items, so the irritants had to wait unless you were willing to drop some roadmap and/or maintenance items. In other words, only a very small portion (if any) of the irritants were ever addressed, and the end users saw no relief from “The Ugly”.

Eliminate "The Ugly" with IRS

Unfortunately, I never identified a good solution for this while running those product management teams early in my career, or even as I later managed very large software development organizations. It was always the same dilemma, positioned around a tradeoff between the already-planned roadmap and maintenance items versus the irritants. Unfortunately, the irritants rarely made it onto the list!

It was not until our customer labeled Calavista as their Irritant Removal System that I made the connection. As their IRS, we provide a cost-effective full team that delivers high-quality code on schedule and can scale up or down as needed.  Let’s break that statement down. Cost-effective means that our costs are roughly the same as, and perhaps even less than, those of typical internal development teams. Full team means the IRS includes more than just a couple of strong developers. While strong development talent is a good start, you need the full team, including management oversight, architecture, testing, DevOps, and possibly even automation of the development processes, along with a proven methodology. High-quality code seems self-explanatory, but too often this is overlooked, and people believe they can “test in quality” at the end.  We believe delivering high-quality code requires a focus on quality across the entire software delivery cycle, from initial story points through construction, testing, and production rollout, with consistent, repeatable, proven processes and DevOps automation where possible. On schedule is a point of pride: we successfully deliver projects on schedule for our customers at nearly 3x the industry average. Like high quality, this comes from many aspects of our approach and has been proven on our projects for nearly 20 years. Scale up and scale down provides the ability to address more or fewer irritants as needed, based on business need.

With this model, the IRS can address issues that are impacting your end users’ experience with the software and help you address other items that do not fit within your internal development capacity. So, ask yourself: “What can the IRS do for you?”