The Latest and Greatest in DevSecOps
At Calavista, we were interested in DevOps before there was an overall term for these best practices in software development. We were founded on concepts such as Continuous Integration and Continuous Delivery, pioneering Agile development. Since the early 2000s, the software development and DevOps scene has changed dramatically, exploding with new technology to help streamline production pipelines. It can be hard to keep up. Considering this, we’ve compiled a list of some of the most exciting and relevant recent developments in the DevSecOps scene.
Increased Adoption of GitOps
There are many different tools for software development, but the rise in popularity of Git-based platforms for distribution and collaborative development has led to a new breed of DevOps: GitOps.
What is GitOps?
Git is a version control system used by many developers that tracks changes to source code and allows multiple branches to be created and then merged seamlessly, managing multiple versions of the code in an effective workflow. Incorporating these Git-centered workflows into DevOps pipelines gives us GitOps.
How to Take Advantage of GitOps
GitOps falls under DevSecOps because of its relationship to Continuous Delivery. To take full advantage of GitOps, you need to already have continuous delivery mechanisms in place. That way, when code is committed to the Git repository, it can be containerized (or built) and incorporated into the CI/CD process.
Argo is an example of a GitOps tool that can interact with Git and the CI/CD pipeline, comparing and synchronizing Kubernetes cluster states across locations and taking charge of continuous delivery. GitOps tools can further streamline development processes by making it easier to address issues within clusters without disrupting the environment.
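To make the idea concrete, below is a minimal sketch, in Python, of the reconciliation loop at the heart of GitOps tools like Argo: compare the state declared in Git with the state actually running, then converge the latter toward the former. The helper functions and the dict-based "state" are hypothetical stand-ins; real tools work against full Kubernetes manifests through the cluster API.

```python
# A minimal sketch of GitOps-style reconciliation, assuming desired state is
# read from manifests in a Git checkout and actual state is queried from the
# cluster. Both are modeled here as simple dicts (service name -> image tag);
# real tools such as Argo CD operate on Kubernetes objects via the API server.

def desired_state_from_git():
    # Hypothetical stand-in for parsing the manifests committed to Git.
    return {"web": "v1.4.2", "worker": "v1.4.2", "api": "v1.4.1"}

def actual_state_from_cluster():
    # Hypothetical stand-in for querying what is currently deployed.
    return {"web": "v1.4.1", "worker": "v1.4.2"}

def reconcile():
    desired = desired_state_from_git()
    actual = actual_state_from_cluster()
    for name, tag in desired.items():
        if actual.get(name) != tag:
            # A real controller would apply the manifest / roll the deployment here.
            print(f"syncing {name}: {actual.get(name)} -> {tag}")
    for name in set(actual) - set(desired):
        print(f"pruning {name}: no longer declared in Git")

if __name__ == "__main__":
    # A real controller runs this continuously, on a schedule or on webhook events.
    reconcile()
```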
GitHub Actions enables GitOps not only for Kubernetes clusters, but for application deployments as well. When new features are developed and pushed to GitHub, the Actions scripts are triggered to start the CI/CD process. Since this happens automatically, there’s less work required to launch a new deployment and no risk of forgetting an important step. This approach also scales across multiple environments, like QA and even PR (pull request) environments.
Blending Git with DevSecOps practices makes it easier than ever for software development companies to effectively manage, monitor, and integrate code throughout the development process. This makes GitOps a powerful addition to the DevSecOps world and one worth keeping an eye on.
Serverless Computing
DevSecOps is all about keeping developers focused on developing and streamlining their workflows, so it’s no wonder serverless computing has taken off in the world of DevSecOps lately.
What is Serverless Computing?
By taking advantage of cloud computing models, cloud providers can offer software development companies the opportunity to effectively go serverless.
Serverless computing is a cloud computing model that can dynamically scale the allocation of servers automatically and on demand, so users don't have to worry about server provisioning, availability, or maintenance.
In this way, serverless computing is not truly “serverless,” but from the developer’s perspective it might as well be. Serverless computing platforms like AWS Lambda or Azure Functions allow you to create and deploy code that doesn’t care what server it’s on. Hence, you create code that can be run 100 times, 1,000 times – or even more – in parallel, and you don’t have to worry about what hardware is needed. It’s all taken care of behind the scenes in the cloud.
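As a small illustration, here is a minimal sketch of a Python function written for AWS Lambda, assuming an API Gateway-style proxy event (the event shape shown is an assumption). The point is that nothing in the code knows or cares which server it runs on, or how many copies are running in parallel.

```python
import json

# Minimal sketch of a serverless function for the AWS Lambda Python runtime,
# assuming an API Gateway-style proxy event. The platform decides where and
# how many instances of this handler run; the code never references a server.

def lambda_handler(event, context):
    # Pull a value out of the incoming request (assumed event shape).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```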
The Benefits of Serverless Computing
As mentioned above, serverless computing allows developers to focus on writing code and deploying it to the cloud, with little thought about the underlying infrastructure. Not only does this free up developer brainpower, but it also makes it easier to integrate code in different environments, scaling well with DevOps environments.
Since serverless code can run in parallel, it is highly scalable and cost-effective. Rather than paying for and running servers sized for the maximum expected capacity, users only pay for the resources used while the code is running – resources that are dynamically adjusted to demand. Users get just what they need, and if demand suddenly increases, the cloud provider can automatically adjust and keep things running smoothly. This not only saves money but also takes infrastructure management off the development team’s plate.
The scalability, affordability, and efficiency of serverless computing earn it a shout-out among exciting new DevSecOps developments.
The Rise of DevSecOps
Finally we get to DevSecOps, a newer term that we’re embracing instead of just DevOps. Phishing and other cybersecurity attacks have been on the rise, so it makes sense that DevOps pioneers are starting to incorporate security into DevOps pipelines, creating the field of DevSecOps.
What is DevSecOps?
By integrating security into development and operations practices, organizations can build and deliver secure products efficiently and safely.
DevSecOps refers to incorporating security concerns into DevOps pipelines by automating security processes and building processes to quickly detect and respond to security threats.
Modern threats (such as supply chain vulnerabilities and ever-evolving threat actors) require modern solutions: integrating security into every step of the development process. DevOps already heavily relies on automation to streamline development, so it stands to reason that automation can be used to secure development processes as well. Automated tests such as static application security testing (SAST) and software composition analysis (SCA) are two types of DevSecOps tooling that can take your security to another level.
The Importance of DevSecOps
DevSecOps brings the convenience of DevOps to security while integrating security into the development process.
DevOps principles like automation allow software development companies to ensure applications are secure without sacrificing development time or hiring a large application security team. Static code analysis should be performed automatically with every code commit, giving developers prompt security feedback on the code they write. The longer it takes for a vulnerability to be uncovered, the longer it will take to fix.
With automatic feedback, the change is still front of mind for the developer and no additional features have yet been built on top of the vulnerable code. This means more secure code gets baked into the product over time.
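As an illustration of what automatic feedback on every commit can look like, here is a minimal sketch of a CI gate script in Python. It assumes a SAST tool has already run and written its findings to a JSON report; the report path and format are hypothetical, so the parsing would need to be adapted to whatever scanner you actually use.

```python
import json
import sys

# A minimal sketch of a commit-time security gate. Assumes a scanner has
# written findings to "scan-report.json" in a simple, hypothetical format:
# {"findings": [{"severity": "...", "rule": "...", "file": "..."}]}.

BLOCKING_SEVERITIES = {"critical", "high"}

def main(report_path="scan-report.json"):
    with open(report_path) as fh:
        report = json.load(fh)

    blockers = [f for f in report.get("findings", [])
                if f.get("severity", "").lower() in BLOCKING_SEVERITIES]

    for finding in blockers:
        print(f"{finding.get('severity')}: {finding.get('rule')} in {finding.get('file')}")

    # A non-zero exit code fails the CI job, so vulnerable code never merges silently.
    sys.exit(1 if blockers else 0)

if __name__ == "__main__":
    main()
```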
Securing Your DevOps Environment
DevOps, with its many integration points, can be vulnerable to attacks from different fronts, so it is important to approach DevOps practices in a secure manner and make sure the process stays continuously secure. Even with so many security tools and scanners on the market, and even when development frameworks are secure by design, attackers can exploit a system by coming in through the side door of the DevOps pipeline.
One way to secure DevOps is to implement secrets management; another is to configure least-privilege permissions on cloud accounts and store those credentials securely. These practices don’t help developers write more secure code, but ignoring them can leave even the most secure code completely vulnerable to a variety of attacks. That makes them a necessary component of a DevSecOps environment.
Conclusion
The increased adoption of GitOps, serverless computing, and the shift towards DevSecOps are some notable developments in the world of DevSecOps that we are excited about – but they are just a few of many. Considering DevSecOps is almost synonymous with innovation, the field is always expanding. Keep up with the latest best practices and development techniques by following our blogs, and if you have any questions about how to integrate some of these techniques or adopt other DevOps principles, email us at info@calavista.com.
Embracing DevSecOps
Written By: Jeremy Polansky, CISSP, Director of DevSecOps at Calavista
Putting Security at the Heart of DevOps
At Calavista, we were doing DevOps before it had a buzz-word title. Likewise, we’ve been doing DevSecOps for quite some time, and it’s about time we start calling it what it is.
If you want to learn more about DevOps and how it can speed up the software development lifecycle, check out some of our previous blogs like DevOps Methodology Explained or DevOps Metrics. It’s important to recognize that the same things that make DevOps so effective at easing software development can sometimes make security breaches easier, too.
Why put the "Sec" in DevOps?
Processes like Automation, Continuous Integration, and Continuous Delivery allow developers to develop and deploy their code rapidly, over and over again. It’s possible to run multiple environments in parallel and to leave a lot of work up to automation. This saves hours of manpower and cracks open a world of possibilities for developers, but the tradeoff is that bugs and security issues can be introduced just as rapidly.
If your security screening and testing is not at least as fast as your development process, then the advantages of DevOps may be outweighed by newfound security problems. This is why security must be integrated into DevOps practices.
The Demand for DevSecOps
Think about it. In recent years, companies have been deploying software faster than ever. The software development lifecycle (SDLC) has become agile and continuous, which also releases vulnerabilities into production with more speed and ease than ever. To keep up with that pace, security must be applied in an equally speedy, agile, and continuous manner.
Even more, code never runs by itself. It runs on a server or in the cloud, through processes defined by the business and under governance from the industry and jurisdictions in which the company operates. All these components contribute to risk and must be properly evaluated for security weaknesses, all the way to the production environment – and as the software and the SDLC move more quickly, security teams must scale with them.
But separating security teams from development teams is not terribly effective either. As separate entities, it is all too easy for them to fall out of step and for security to fall behind. Good security also requires some knowledge of development and the SDLC. By working together, security can be integrated throughout the development and operations processes.
As with development and QA, development and security should go hand-in-hand. That means modern application security must be approached differently than other areas of security. Now with DevOps and the modern SDLC, we see the demand for the dedicated field of DevSecOps.
Using DevSecOps to Enhance Security
Hopefully, at this point, the importance of integrating security into DevOps practices and pipelines is clear. Still, that doesn’t mean that it is clear how to actually go about doing that. To “do” DevSecOps involves integrating development and security, using DevOps technology to advance security screening and empower developers to code more securely.
Integrating Development and Security
As mentioned above, leaning into DevSecOps means breaking down the walls between security teams and development teams. Application security is tied into the development team in critical ways.
Here are a few ways that the product team practices security:
- The dev team mitigates identified vulnerabilities
- The dev team architects apps securely
- The dev team chooses which third party libraries to use
While these are all integration points for security at the development level, security is rarely the main focus of development or the people in development roles. Developers aren’t paid to mitigate vulnerabilities; their focus is on writing code – that is, shipping new features and addressing defects. Similar things can be said for architects, DevOps engineers, and product owners. That is why the security team needs to be part of the development process and work closely with these players.
Increasing communication and collaboration between DevSecOps or security engineers and the development team helps ensure that DevOps contributes to the security of your development process rather than detracting from it.
Using DevOps Technology to Enhance Security
Above, we discussed how DevOps practices without proper monitoring can lead to more security problems than they’re worth. However, we can still use DevOps techniques to improve security.
Automation tools can help automate elements of security screening. Principles similar to unit tests can be applied to security screening, making sure that no code is deployed without automatically being checked for vulnerabilities and receiving the all-clear. This means one of the toughest parts of security – ensuring every system is accounted for and scanned – becomes much easier when paired with the DevOps approach.
Static Application Security Testing (SAST, or static scanners) can run within a traditional CI/CD pipeline to ensure all code is scanned. The same approach applies to container scanning and software composition analysis (SCA) for the software supply chain. And since DevOps practices typically stand up multiple environments such as QA and staging, Dynamic Scanners (DAST) and even manual assessments always have a fresh deployment to test. Because these deployments are maintained as code, there’s assurance that the environment won’t have different vulnerabilities when it goes live into production.
Organizations have seen the need to secure their software supply chain with tools like SonarQube and GitHub’s Dependabot – even more so since the Log4Shell vulnerability. It’s imperative these checks are done in tandem with DevOps. With Software Composition Analysis (SCA) and open-source scanning built into the DevOps pipeline, developers are notified of a weak package as soon as the code is committed, and if there’s an issue, they can remedy it quickly. If these checks are done later on, it becomes more difficult to swap out the third-party libraries, which increases technical debt across the whole project. With a DevOps approach, that technical debt is avoided and supply-chain vulnerabilities are remediated more quickly.
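To illustrate the core of the SCA idea, here is a minimal Python sketch that checks a project’s pinned dependencies against a list of known-bad versions. The deny-list is a hypothetical stand-in; real tools such as Dependabot or dedicated SCA scanners consult maintained advisory databases rather than a hard-coded dict.

```python
# A minimal sketch of software composition analysis: flag pinned dependencies
# that match known-vulnerable versions. The VULNERABLE entry is hypothetical;
# real tooling queries curated advisory databases.

VULNERABLE = {
    ("examplelib", "1.2.3"): "known remote-code-execution advisory (hypothetical)",
}

def parse_requirements(path="requirements.txt"):
    pins = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pins[name.strip().lower()] = version.strip()
    return pins

def check(pins):
    return [(name, version, VULNERABLE[(name, version)])
            for name, version in pins.items() if (name, version) in VULNERABLE]

if __name__ == "__main__":
    findings = check(parse_requirements())
    for name, version, advisory in findings:
        print(f"{name}=={version}: {advisory}")
    raise SystemExit(1 if findings else 0)
```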
Conclusion
At Calavista, our thought leadership on best practices in software development has been a point of pride for over twenty years. Founded on DevOps tenets like Continuous Integration and Continuous Delivery, we have never stopped pushing the field of development. We were doing this before the term DevOps even existed, and we have been ensuring stable and secure development within those pipelines before anyone ever uttered the phrase “DevSecOps.”
The goal of DevOps is to keep developers developing as efficiently as possible, but with the rise of advanced cybersecurity threats, more attention needs to be paid to security within the development process. We don’t want each of our developers to become a security expert, but we do want to deliver secure code. That’s why we are embracing the concept of DevSecOps: bringing security into the conversation and encouraging close collaboration between security, developers, and operations managers.
Lessons Learned From Austin Energy
Reflections on Our Latest Blackout
This one goes out to our Austin-area colleagues. Around the office (and over the phone) we have been discussing the events of the past week or two – power is finally restored across town after a classic February cold snap. It’s true that the ice storm we saw recently was unique, but it wasn’t totally unheard of. The storm we faced two years ago was much worse overall in terms of the strain it placed on the state’s energy grid.
What made this storm different was not that the grid failed, but that the ice accumulating on trees caused limbs to crash to the ground, taking power lines with them. The many small fires (literally, in some cases) made it more difficult for energy providers to keep up with outages. Rather than having a large problem to focus on, they had to deal with each outage individually, manually repairing power lines and clearing downed trees. Often, only hours after restoring service to a line, another limb would come crashing down, taking out the newly restored line. So people who’d just had their power restored would once again lose it, and end up at the back of the queue for service.
This is not something that Austin Energy could simply prevent, and no one should really hold them accountable for the fact that these small outages occurred. But as some of us sat in the dark on Tuesday, Wednesday, and Thursday, we realized there are some takeaways from the way they responded that we can apply to our own lives – and businesses.
1. Have Redundancy Systems in Place
If something is critical, you need to make sure that you have a backup. This applies to your hard drive as well as your processes. In Austin Energy’s case, some could argue that with a true “grid,” one interrupted line should not mean a house further down the line loses power. We are not electricians, so we can’t say for certain how efficient redundant electrical lines would really be, but it certainly got us thinking about redundant systems in our own lives.
On a personal level, some of us own generators or solar panels, home redundancy systems to ensure that if one line of power fails, our houses can still stay warm. We back up our computers to a hard drive and to the cloud, so that if the hard drive is destroyed or the cloud is hacked, it still exists somewhere. We recognize that it is important to not let one failure derail our entire system.
Redundancy in Software Development
This applies to many realms of life and industries – including software development. When building a development team, you don’t want to base it around one star player; you want a strong team that will survive even if a given individual suddenly quits or is unable to communicate without power. We see this time and again working with our customers. They say how critical it is to have their own people “own” some piece of their product, so they keep that piece of code to themselves – and then they put it in the hands of one or two people. This introduces enormous risk when one or both move, change jobs, or (in one notable case) are bitten by a rattlesnake (he's fine now, but was out for several scary days). Great teams are resilient and continue to produce through adversity.
Redundancy is critical in other areas as well, including data and connectivity – you cannot develop software effectively without them. As mentioned, it’s important to keep your data in multiple safe places. No one hard drive, or system, should be responsible for carrying the weight of your business or your code. You may use a RAID array – but what if the building burns down at night? There are several great tools and storage options available through cloud offerings for software development that make this easy to address. Connectivity is much more challenging; here, having teams in multiple areas reduces your risk, as it is unlikely that every area would face connectivity issues at the same time. This is one reason that Calavista provides teams from all over the world.
2. Set Proper Expectations
As I said, it’s not Austin Energy’s fault per se that this storm happened and destroyed so many power lines – we have to give them that. However, one area where they did fail was in properly communicating to the public how long power would be out.
It is understandable that many small outages caused by physical damage would make it difficult to give a broad, accurate estimate of when power would come back on for everyone, but if you do not have enough information to make a judgement, it is important to be upfront about that, rather than throwing a dart at a wall. If people knew they would be out of power for days, they would behave differently than if they expected it to come on in just a few hours.
Setting Expectations as a Business
Likewise, we recognize how important it is to set proper expectations for development timelines and budgets. At Calavista, we have a thorough estimation process that we go through with every potential client. This requires more labor than most people put in upfront, but we believe it is the only way to give an accurate estimate and set realistic expectations. Sometimes, we’ll be told that our time estimates are higher than a competitor’s. That implies either that they think there’s less to be done than we do, or that they agree on the amount of work but think it can be done more quickly. When we ask for clarification, more often than not the customer tells us the competitor didn’t give them a breakdown of the work as we did – just a total time. And when those projects go to the competitor, they generally fail.
The point is, without a mature and realistic estimation process, you might as well throw a dart at the wall – in which case you will more than likely end up overpromising. Overpromising is how you set clients up to be disappointed and to distrust you, undermining the relationship early on. It is important not only to gain clients, but to make them happy and keep them happy while you work together. Taking the time to understand the situation, make detailed estimates, and clearly set expectations at the beginning can have a massive payoff over the life of a project – and over the hopefully longer life of the client relationship.
As the work progresses, we continually provide visibility into the status and set expectations properly with the customer. No one likes surprises in business – you want to know what is coming and when so that you can plan accordingly. There is no value in delivering early if proper expectations are not set.
3. Communicate and Own Failures
This point is closely related to setting proper expectations but deserves its own segment because of how difficult it can sometimes be. It will also be an important step if you fail to set proper expectations. For example, during the power outages this past week, many Austinites were told that their power would be turned back on soon – then nothing. Then another message that it would be on soon, again followed by nothing. That leaves customers frustrated and distrustful.
How to Successfully Fail as a Business
If you fail, customers are going to be frustrated, period. But if you get out in front of it, own it, and apologize for it, you may be able to avoid losing their trust. If you notice you are running behind, burning through the budget faster than expected, or are on track to disappoint in any other regard, it is best to get ahead of it and notify your customer. Most customers, if you actually ask, will say something like, “I can live with a delay. I can’t live with not knowing about the delay until it’s here.” Try to work together to understand and explain how this happened; hopefully that will point to ways to avoid it happening again. It’s amazing the impact that owning a failure, and then inviting the customer into the review and lessons-learned process as part of the team, can have on a relationship.
Conclusion
At the end of the day, we are not electrical or civil engineers, and we don’t understand quite what goes into maintaining a complex energy network for one of the biggest cities in the nation. However, we do understand how to be resilient to unforeseen challenges. Having a resilient business model means many things. It means being able to scale and tackle challenges on the business side of things – declining business, recession, inflation, what have you – but also in the real world. We are not untouchable by disasters like epidemics, storms, and power outages, yet Calavista’s operational model means we suffer relatively little when touched.
We have taken great care to build a model that is robust, redundant, and resilient – one where, even if we cannot make it into the office or safely meet face to face, our business is not affected. When COVID burst upon us, an email went out advising people not to come to the office for a while. That’s all it took – the infrastructure to support remote work had been in place for years and was regularly exercised. Likewise, when some team members were taken offline when their country was invaded, others in different areas seamlessly picked up the slack. When a large hosting provider suffered an outage, we shifted easily to another. Work can stay on track and keep getting done through many challenges, even disasters. For work to grind to a halt at Calavista, something incredibly serious and totally unexpected would have to occur. Maybe a meteor. You can’t plan around everything.
DevOps Metrics: Key Elements of Continuous Monitoring
Written By: Daniel Kulvicki, Solutions Director at Calavista
Our previous blogs have defined DevOps as a collaborative culture with its own defined practices, ideas, tools, technology, processes, and metrics. Integrating some of these elements into your workflow can help streamline and improve your development process. Today, I want to focus on metrics that we associate with DevOps. Read on to learn more about the different types of DevOps metrics, what they are, and how they can add to your development pipeline.
What Are DevOps Metrics?
If you drive a car, you will see several gauges (such as speed and fuel level) and possibly warning lights (such as Check Engine and Low Tire Pressure) on the dashboard that provide key information about how the car is performing. DevOps metrics provide the same visibility for your development and operational processes. Adopting DevOps processes means adopting a culture of collaboration and a drive for improvement in your software development. In previous blogs, we split DevOps into six main pillars: Collaboration, Automation, Continuous Integration, Continuous Testing, Continuous Delivery, and Continuous Monitoring. DevOps metrics are a key element of Continuous Monitoring and provide key insights into many of the other pillars.
DevOps metrics are measurements and indicators that can be used as part of Continuous Monitoring to not just assess application performance upon completion, but to assess the efficiency of the development process as well.
Using DevOps metrics as part of continuous monitoring enables proactive tracking, analysis of data, and insight into potential automation steps. Metrics and key performance indicators (KPIs) are introduced as part of continuous monitoring to provide more insight into how production code is both developed and run.
One thing that makes DevOps metrics stand out is that they are about the development process, not just the outcome. Many companies may rely largely – or even solely – on performance metrics, measurements and indicators of how the software functions once it is complete. But those who are familiar with DevOps know that there is much more to software development than the final product. Like the gauges on the car dashboard, these metrics are about how things are running – not where you will be when you get there.
Metrics like the ones below are more process-oriented than outcome-oriented, which can help assess the efficiency and effectiveness of software development pipelines and processes, providing insight into your DevOps operation. Not only that, but they also help ensure that your product is on the right track from the start, and that mistakes or inefficiencies are caught early.
Types of DevOps Metrics
Code Coverage Metrics
You can think of code coverage metrics as those that look at the amount of code exercised by automated tests, as reported by tools like SonarQube or Clover. They’re kind of like metrics for your metrics tools – whereas test tools will tell you what percent of your tests are passing or failing, code coverage tools will tell you what percent of your code is actually being exercised by those tests. This gives you an idea of how in-depth DevOps metrics can be – they can even provide data on the way you collect and manage data.
For example, code coverage metrics may reveal that a chunk of your code is not being automatically covered, telling you that any tests done by those automation tools are not relevant to this portion of the code. Having a 100% pass rate may sound great, until you realize that you’re only testing 4% of your code base. Code coverage is one of the most common KPIs and is a great place to start if you want to start putting numbers to your development process.
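To see why coverage matters alongside pass rate, here is a toy Python calculation contrasting the two. The numbers are made up for illustration; in practice, tools like SonarQube or Clover report these figures for you.

```python
# Toy illustration of why pass rate alone can mislead: a perfect pass rate
# says nothing about how much of the codebase the tests actually exercise.

def pass_rate(tests_passed, tests_total):
    return 100.0 * tests_passed / tests_total

def coverage_rate(lines_executed_by_tests, lines_total):
    return 100.0 * lines_executed_by_tests / lines_total

if __name__ == "__main__":
    print(f"test pass rate: {pass_rate(25, 25):.1f}%")           # 100.0% -- looks great
    print(f"line coverage:  {coverage_rate(400, 10_000):.1f}%")  # 4.0% -- most code untested
```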
DevOps & Quality Metrics
Development and quality metrics will give you an idea of how effective your development process is. This includes things like velocity, deployment frequency, change volume, failed deployment rate, and more. For example, velocity, one of the most common and essential development metrics, shows the average number of story points completed over previous sprints. It is easy both to measure and to understand, and it can be used to identify inefficiencies in your process as well as to make projections about upcoming sprints. If you notice velocity slowing down, you know you need to take a closer look at your processes.
An example of a quality-related metric is Defect Injection Rate, which represents the number of defects discovered and reported during a particular phase of development. This can tell you which phase of product development your team has the most trouble with across projects, but it does not tell you how quickly those defects are fixed. Mean Time To Recovery (MTTR), on the other hand, reflects your ability to respond appropriately to issues by tracking how much time elapses between issue identification and resolution. This is an example of a metric that takes continuous monitoring one step further; it is one thing to effectively detect errors, it is another thing entirely to address them rapidly.
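As a concrete illustration, here is a small Python sketch that computes two of the metrics just described: velocity as the average number of story points completed per sprint, and MTTR as the average time between issue identification and resolution. The sample data is invented for illustration.

```python
from datetime import datetime
from statistics import mean

# Velocity: average completed story points per sprint (sample data below).
completed_points_per_sprint = [21, 18, 24, 19]

# MTTR: average elapsed time between (identified, resolved) timestamps.
incidents = [
    (datetime(2023, 3, 1, 9, 0), datetime(2023, 3, 1, 11, 30)),
    (datetime(2023, 3, 7, 14, 0), datetime(2023, 3, 7, 14, 45)),
]

velocity = mean(completed_points_per_sprint)
mttr_hours = mean((resolved - identified).total_seconds() / 3600
                  for identified, resolved in incidents)

print(f"velocity: {velocity:.1f} points/sprint")
print(f"MTTR:     {mttr_hours:.2f} hours")
```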
Using development and quality metrics ensures that quality is baked into the product from the beginning through a sound development process. They allow you to keep tabs on your development team and give numbers and data to the entire process.
Application Performance Metrics
Application performance metrics are probably some of the most common metrics used in the industry, even by groups that don’t use DevOps practices. These types of metrics are more outcome oriented, measuring the way that software functions once it is “up and running.” Some examples include average response time, error rates, and application availability, among others.
Error rate is a well-known example, measuring the rate at which errors occur in the system. It can be used to gauge general performance once the application is launched, or you can use it proactively to diagnose larger failures by tracking it in specific functional areas of an application as they are built out.
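A rolling error rate is straightforward to sketch as well. The snippet below, with an arbitrary window size and synthetic request outcomes, simply reports what fraction of recent requests failed; a real system would feed it from request logs or middleware.

```python
from collections import deque

# A toy rolling error-rate gauge: remember the last N request outcomes and
# report the percentage that failed. Window size and sample data are arbitrary.

class ErrorRate:
    def __init__(self, window=500):
        self.outcomes = deque(maxlen=window)

    def record(self, ok: bool):
        self.outcomes.append(ok)

    def rate(self) -> float:
        if not self.outcomes:
            return 0.0
        failures = sum(1 for ok in self.outcomes if not ok)
        return 100.0 * failures / len(self.outcomes)

monitor = ErrorRate()
for status in [200, 200, 500, 200, 503, 200]:
    monitor.record(status < 500)
print(f"error rate over window: {monitor.rate():.1f}%")
```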
Database Metrics
If you need to get an idea of how your databases are functioning, turn to database metrics like memory utilization, throughput, and query performance. These provide insight into any messiness within your databases, which can cause servers to slow down or crash. Database throughput tells you how much work your database server does per second, showing how quickly it can process incoming queries. Query performance metrics tell you how long queries take to process and return a response, and should help point you to needed indexes or specific problem areas in your queries.
Keeping track of database metrics such as these allows you to stay on top of database performance. A choked-up database makes it very difficult to get anything done or for anything to run effectively. Database metrics help you keep your development process on track and identify potential bottlenecks.
Security & Vulnerability Metrics
Security and vulnerability metrics can tell you if there are any weaknesses in your current security systems and policies. They can be used to determine general susceptibility or specifically where the vulnerabilities are. When operating under a DevOps framework, there is a lot of collaboration, so keeping a close eye on security is of the utmost importance.
One example is static application security testing (SAST), which can automatically scan source code and identify vulnerabilities and flaws within it. Dynamic application security testing (DAST), on the other hand, examines the running application once it has been deployed, looking for vulnerabilities that might not be visible in the source code.
These types of metrics can be gathered by tools like Splunk, Intruder.io, and other security monitoring and security information and event management (SIEM) platforms.
Deciding Which Metrics to Use
Using metrics throughout the entire development process falls under the concept of Continuous Monitoring. This really does mean continuous, from the first line of code through to the running application. This automated process provides real-time metrics like the ones discussed above to help you understand how your development process is working, how secure it is, and how effective the software it produces is.
There are even more metrics that could be used beyond the ones discussed in this blog. Using all of them may be overwhelming or simply unnecessary, depending on your goals. When deciding which metrics to incorporate into your DevOps pipeline, it is important to consider what information will actually be helpful to you. A metric means nothing if the information is not utilized.
If you know you would like some help building out your DevOps pipeline and incorporating new metrics and monitoring strategies, email us at info@calavista.com. You can also subscribe to our mailing list at the bottom of the page to learn more about DevOps and software.
Continuous Monitoring: What Is It, And How Does It Impact DevOps Today?
Written By: Daniel Kulvicki, Solutions Director at Calavista
DevOps has made it possible for organizations to develop and release stable applications faster than ever. However, an organization with a proper DevOps pipeline should always include Continuous Monitoring throughout the development lifecycle. Continuous Monitoring (CM) is a fully automated process that provides real-time data at every stage of an organization’s DevOps pipeline, helping teams identify security risks or compliance issues before the application reaches the production environment.
This article will explain what Continuous Monitoring is and how it impacts DevOps today. Let us dive in!
What is Continuous Monitoring?
Continuous Monitoring is one of the most critical processes of DevOps. It is an automated process that allows software development organizations to observe and detect security threats and compliance issues throughout the development lifecycle. CM also provides automated metric reporting to measure the application’s performance and track the user experience trends.
Continuous Monitoring is crucial to all stages of software development. It enables smooth collaboration among the development, Quality Assurance, and business teams.
For example, when the DevOps team releases an application update, the customer service team depends on Continuous Monitoring (CM) to identify any complaints from end users, and the development team can then address those complaints quickly. Without Continuous Monitoring processes in place, an organization is usually blind to negative customer sentiment.
In simpler terms, CM provides feedback on errors, security threats, and compliance issues so that the teams can address or rectify these issues faster.
What is the importance of Continuous Monitoring in DevOps?
Continuous Monitoring delivers the visibility needed to drive greater quality across the entire product. Organizations are now using DevOps to develop multiple applications simultaneously, which means developers are consistently uploading their code to a central repository. Continuous Monitoring minimizes the chances of incorrect code getting past the various testing environments: CM automatically detects and reports these errors so that the response team can address them in real time.
Other than identifying and reporting errors, Continuous Monitoring comes with additional benefits:
Enhanced visibility and security
DevOps teams rely on automated processes to analyze data across all the stages of an organization’s DevOps pipeline. Continuous Integration and Continuous Delivery (CI/CD) are some of the most crucial steps of DevOps, but they involve constant changes to the code. CM helps ensure that erroneous code does not reach the production environment: it detects errors as soon as developers integrate code into the central repository, and real-time reporting lets the response team provide a fix as soon as issues are detected.
Continuous Monitoring allows the operations team to analyze data throughout the DevOps pipeline. This way, they can track any security threats and address them immediately. CM also ensures that the team does not miss any crucial incidents or trends.
Instant Feedback
CM involves a constant feedback loop. This feedback is essential to optimizing applications to meet end-user needs. At the same time, senior leadership can use this feedback to make informed decisions that align with the business goals. DevOps is about delivering rapidly without compromising the quality and functionality of the applications.
Real-Time Metrics Reporting
In a development setting, teams work together to release multiple apps at the same time. Without a proper continuous monitoring strategy, this can pose a challenge because of the rapid, frequent changes coming from different developers and the combined processes of the DevOps methodology. It all needs to happen in a controlled environment with real-time reporting of metrics.
Continuous Monitoring tools provide automated reporting of metrics at each stage of the DevOps pipeline. You will need a tool that can look at the team’s productivity. It is also crucial to have a tool that can analyze your processes’ vulnerability and compliance issues.
Continuous Monitoring alerts the operator to broken code before downtime occurs. In some cases, the operator can assign automated actions based on the organization’s risk analysis and DevOps strategy.
Enhanced Business Performance
Executives in an organization can use data from the CM processes to make time-efficient and cost-effective decisions. In addition, the business functions team can use the metric reports to optimize sales and marketing processes, which enhances overall business performance.
For instance, the team can use the data to define the key performance indicators of the business. The organization can also use what it learns from continuous monitoring to tailor its DevOps pipeline.
Better Automation
Automation is the backbone of DevOps processes, especially when it comes to metrics reporting, and it is a necessity at every stage of DevOps. It becomes even more effective when an organization integrates deployment automation with monitoring tools.
Not only does this provide better reporting, but it also enables smooth collaboration between developers and operators: they no longer need to go back and forth to analyze data and fix issues. Continuous monitoring automation alerts operators whenever there is a bug in the development phase; the operations team alerts the response team, and the bugs are fixed in real time. This reduces the chance of bugs reaching the production environment.
With automation, the team can also assign automated actions for repetitive tasks, allowing a smooth feedback loop across all development phases. This is a big part of why organizations adopt DevOps: it enables faster, continuous delivery of high-quality applications.
Three Types of Continuous Monitoring in DevOps
Security threats and compliance issues are some of the challenges that software development organizations face today. However, a strategic continuous monitoring process allows DevOps teams to foresee these problems. In addition, CM helps organizations stop malicious attacks from outside, unauthorized access, or control failures. There are three different areas, or types, of Continuous Monitoring in DevOps that help organizations combat the security threats and compliance issues they’re faced with.
Infrastructure Monitoring
Good infrastructure enhances your application delivery. DevOps teams can use infrastructure monitoring to collect and analyze data to point out any disruptions or incidents that may occur. It includes monitoring the operating system, storage, user permissions, and the overall server status and health.
Network Monitoring
Network monitoring, on the other hand, looks at network performance, including server bandwidth, latency, and availability. With continuous network monitoring, the operations and QA teams can scale the organization’s resources and distribute workloads evenly.
Application Monitoring
Lastly, application monitoring helps the team analyze and fix performance issues. The team can rely on application monitoring to analyze app error rates, uptime, user experience, and system response times.
Best Practices for Continuous Monitoring in DevOps
Continuous monitoring should be applied in all areas of the DevOps pipeline for accurate metrics and timely responses. Below are four best practices for continuous monitoring:
1. Define the organization's scope of Continuous Monitoring implementation
Like all the processes of DevOps, you will need to identify your scope for Continuous Monitoring implementation. This involves a thorough risk analysis to determine the processes that you will prioritize when implementing CM. For instance, if you are in the finance industry, you may want to analyze the security risks before settling on the processes to monitor.
To do this, you will need to collect as much information as possible about your DevOps Pipeline. Then, by analyzing this data, you can understand what the organization requires to perform at an optimal level.
Choose to monitor processes that provide crucial feedback, helping you improve your environment and enhance your overall business performance.
2. Use metrics and historical data to determine trends in risk management
Analyzing historical data is an excellent way to decide what to monitor based on risk analysis. For instance, historical data reveals the security threats or compliance issues the company has faced in the past. This way, you can use the trends and apply continuous monitoring to the relevant processes accordingly.
3. Incorporate automation in all stages of development
Once you identify the processes you want to monitor, it is crucial to automate the monitoring itself. Automating continuous monitoring frees the team to focus on other essential tasks. It also aids risk mitigation, as operators are notified of any security threats that occur and can then alert the response team to resolve issues immediately.
As with automation, it is best to include continuous monitoring in all stages of the DevOps workflow.
4. Choose an appropriate monitoring tool
Getting the correct DevOps monitoring tool is crucial to successful and consistent tracking. Using the data collected, you can choose a monitoring tool that best suits your DevOps workflow. You should therefore outline your preferred functionalities for your monitoring tool.
An excellent monitoring tool should include reporting and diagnostic features. It should also have an easy-to-use dashboard, one that stakeholders, developers, and operations teams can learn quickly. Continuous monitoring is all about providing relevant data to help improve the DevOps workflow of an organization, so your chosen tool should be able to collect vast amounts of data. It should also include notifications that immediately alert the admin when a security risk or compliance issue arises anywhere in the DevOps pipeline.
Some companies prefer custom-built DevOps monitoring tools, while others use third-party tools. Either way, the tools must align with the goals of the organization. In addition, companies should incorporate continuous monitoring in all stages of DevOps, as identifying issues early is crucial to fast, high-quality application delivery.
Conclusion
Now that you have a good idea of what Continuous Monitoring is and the benefits that it grants, our next blog will dive into the actual metrics that allow us to fully gauge the quality of the products we are delivering. If you have any detailed questions about Continuous Monitoring, I would be happy to answer them!
Big Data, Fast Data, and Machine Learning
Written By: Steve Zagieboylo, Senior Architect at Calavista
While it may seem I’m just trying to work in as many buzzwords as I can, in fact there really is an important intersection of these three elements. I’ve been interested in both big data and fast data for several years, and my newest tech interest is machine learning. As I have learned about the latter, I have come to see that there are problems that require all three to be truly effective. One application for which I’m looking at bringing these technologies together is Recommender Systems for brick-and-mortar shops.
Big Data + Machine Learning ---> Recommender Systems
Probably the first big win for machine learning was Recommender Systems. You’re probably familiar with these in your online shopping, movie watching, or music selection activities, where the website suggests (usually pretty accurately) additional products, movies, or music that you would enjoy. There are a few algorithms for generating these recommendations, but one aspect of them all is that they need a lot of data to “train” the system.
In a shopping recommender, data is fed into the system as "features." These are measurable pieces of data that the developer thought would be relevant to the buyer biting on additional lures and adding more items to their cart. The developer doesn't have to know how those features are relevant; as long as he is feeding into the machine learning system a reasonable set of features, it will figure out which are significant and to what degree. The developer "trains" the system by feeding in known data with known results. The machine learning program tweaks all the parameters – how much it cares about each feature – until the known data tends to create the known results.
The synergy for big data and machine learning, therefore, is straightforward. There is a ton of data that is potentially relevant, and the developer will need a lot of known examples with all that data available before he can train the system. Since he isn't sure what data will make good features, he has to collect everything, even though some of it will turn out to be inconsequential. It can be surprising which pieces of information actually matter, but he won't find out until actually training the system with all that data in place.
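To make the train-on-known-data idea concrete, here is a toy Python sketch using scikit-learn’s logistic regression. The features and data are synthetic and the model is deliberately simple; production recommenders typically use richer algorithms, but the workflow – feed in candidate features, let training decide which ones matter – is the same.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes a shopper with a few candidate features; the label says
# whether they accepted a recommendation. Data and feature choices are synthetic.
# features: [minutes_on_site, items_in_cart, viewed_related_item (0/1)]
X = np.array([
    [12, 3, 1],
    [ 2, 1, 0],
    [25, 5, 1],
    [ 4, 0, 0],
    [18, 2, 1],
    [ 3, 1, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = added the recommended item to the cart

# Training tweaks how much the model cares about each feature.
model = LogisticRegression().fit(X, y)

new_shopper = np.array([[15, 2, 1]])
print("probability of accepting a recommendation:",
      model.predict_proba(new_shopper)[0][1])
```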
Fast Data
Fast data systems also deal with a lot of data, but do it in real time. Typically, there is less variety of data types than what you would see in big data, but the quantity can be staggering, and the decisions it makes need to be timely.
Fast Data in Recommender Systems?
Returning to our example of Recommender Systems, there doesn't seem to be much need for fast data. Collecting the feature data for a single user to process through the trained system is not demanding enough to need fast data.
But now, let’s think about brick and mortar shops. Say you’d like to improve the up-sell opportunities to those customers, but to do it in a way that is not as annoying as the ubiquitous “would you like fries with that?” At first, it seems that you know very little about the customer. You don’t have a history like an online store maintains for its logged-in customers, and, unless your salespeople are extremely observant, you don’t have any information about what the customer looked at and considered when making a purchase.
Or do you?
Mobile phones today broadcast their GPS locations to all who would like to listen (or, more accurately, to all who would like to pay for the “anonymous” data). This data is accurate enough to know not just that the person is in your store, but where in the store he is. If you’re tracking this information, you can know how long the phone spent in each department, even how long was spent right in front of a particular product display. If you could feed this sort of information into a recommender system of the same sort your online store has, very likely you could prompt your salesperson to point out some specific items in the clearance section, or to offer a specific coupon.
However, the data does not come into the system in such an easily digestible form, with locations and time spent – the sort of data you could feed to your recommender system. If the customer has connected to your free wi-fi, he is very likely sending a simple location 30 times a minute. When you multiply by all the customers in all your stores, it is a flood of data that is difficult to use for any sort of purchase recommendation. Your big data system could store all this, and could probably crunch it into something useful – maybe by tonight – but that’s much too late to do any more than send the customer an email, which will probably be ignored. If you can't act while the customer is standing there, you've lost the opportunity.
The piece that is missing is fast data. A fast data system can process that stream of GPS locations and build useful information that becomes features in your machine-learning-based recommender system. I don’t know exactly how each datum translates to sales, but that’s the beauty of machine learning – we don’t have to know. If you can generate data that is somehow – probably – relevant, and some known results with which to train the system, it will do the rest. As you get more data and more results, both positive and negative, each retraining will make your recommender system better and better.
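Here is a minimal Python sketch of that missing piece: folding a stream of raw location pings into per-department dwell times that could be handed to the recommender while the customer is still in the store. The ping format, interval, and department names are assumptions; a production version would run on a stream-processing platform rather than in a single process.

```python
from collections import defaultdict

# Turn a stream of (customer, department) pings into dwell-time features.
# Assumes phones on the store wi-fi report roughly every two seconds.
PING_INTERVAL_SECONDS = 2

dwell_seconds = defaultdict(lambda: defaultdict(float))

def handle_ping(customer_id, department):
    # Each ping credits one reporting interval of dwell time to that department.
    dwell_seconds[customer_id][department] += PING_INTERVAL_SECONDS

def features_for(customer_id):
    # Snapshot of "seconds spent per department" -- the recommender's input.
    return dict(dwell_seconds[customer_id])

# Synthetic ping stream standing in for the real-time feed.
stream = [
    ("cust-42", "electronics"), ("cust-42", "electronics"),
    ("cust-42", "clearance"), ("cust-17", "grocery"),
    ("cust-42", "clearance"), ("cust-42", "clearance"),
]
for customer_id, department in stream:
    handle_ping(customer_id, department)

print(features_for("cust-42"))  # {'electronics': 4.0, 'clearance': 6.0}
```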
All three pieces are necessary for this complete system: You need the fast data processor to generate the data while it is still relevant. You need a big data system to hold all the inputs over time, what was recommended, and whether or not it succeeded in generating additional sales. And you need a recommender system based on machine learning to be able to improve your recommendations in the future.
Breaking an Application into Microservices
Written By: Steve Zagieboylo, Senior Architect at Calavista
I recently started a new greenfield project, where the decision was to use a microservices-based architecture. The application was pretty well defined, including most of the data model, since there was a working prototype, so my biggest first concern was how to break it up appropriately into microservices. I’ve participated in this process before, including making some of the mistakes I’ll try to warn you of, so I developed a way to think about the question that helps with the process.
Why Microservices?
There is plenty of documentation out there telling us why a microservices-based architecture is preferable for a large project, so I’ll just hit the highlights.
- Clean, well-defined interfaces make for better code. Sure, you could keep the interfaces this clean in a monolith, maybe if you were the only developer. But you can’t when there are tens or hundreds of developers trying to sneak around them, intentionally or otherwise.
- Separately deployable services allow for quicker turnaround of features and bug fixes. Honestly, I’ve never gotten this far in an application, including the one we just developed. In the early stages, there are still interdependencies, where one service needs certain info from another, so somebody has to create the API, somebody has to write the code that calls it, and it’s probably the same person doing both. But I truly believe that in a mature version of this product, those sorts of changes will diminish, and most small features will remain internal to a single service.
- Horizontal scaling becomes more efficient. I have seen this already. Some services have so little load that two instances is plenty, whereas some services scale up to a dozen instances. (We don’t drop below two so that a single failure doesn’t shut us down.)
How to Approach the Process
There are no hard and fast rules for how to break up your application into microservices, but these are the things that I’ve found are the key considerations.
Data Model
I tend to be a data-first architect, so, of course, this is where I start. Build a basic Entity-Relationship Diagram (ERD) of your data. This will naturally create some groupings, but it’s the rare data model that isn’t completely connected by relationships. Draw a circle around the major groups. This task is generally obvious, so the rest of this section will focus on how to resolve trickier cases.
Consider whether they are really Joins, or just References
Frequently, you’ll find that you have relationships which are many-to-one, where there is really no reason to connect them at a level through which you can perform a join. For instance, say you have a table of CELL_TOWER, with location, power, date_installed, etc. There is also a table of CELL_PHONE, with a separate table that connects the two. (Even though we assume a phone is only connected to one tower, you put this in a separate table because it changes so fast, and it might be an in-memory table.) You also have a table of TOWER_MAINTENANCE_RECORD.
The TOWER_MAINTENANCE_RECORD table is clearly in a different microservice from the one CELL_PHONE is in. But which of these two services does CELL_TOWER belong in?
The answer lies in considering how you’re going to use this data, and where the real join is. Which is more likely: That you’re going to be presenting a list of towers, perhaps filtered or sorted by something to do with its maintenance records? Or that you’re going to present a list of towers filtered or sorted by something to do with the cell phones connected to it? Clearly, the first is far more likely. You might want to get a snapshot of all the phones currently attached to a particular tower, but you don’t need a join to do that. This operation is probably a drill-down from some list of towers, so you already have the tower information in your hand, including its ID. You just need to go to whatever service has the table that connects phones and towers and query against that. This query does want a join to CELL_PHONE, because you’re about to present a list of phones, probably with several columns. The query doesn’t pull anything from CELL_TOWER, though; it just needs the particular tower’s ID to filter against.
Rule of Thumb: When you’re going to be presenting a list of things with filtering and sorting, that’s one important place where you need a real join. The table with the items themselves must join to the tables that have the filtering and sorting parameters, so they have to be in the same microservice.
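To illustrate the reference-not-join pattern, here is a hypothetical Python sketch of the drill-down described above: the tower list (with its real join to maintenance data) comes from one service, and only the tower’s ID crosses the boundary to the service that knows which phones are attached. The service URLs and response shapes are invented for illustration.

```python
import requests

# Hypothetical endpoints for two separate microservices; only IDs cross the
# boundary between them, so CELL_TOWER and CELL_PHONE never need a shared join.
TOWER_SERVICE = "https://towers.example.internal"
CONNECTION_SERVICE = "https://connections.example.internal"

def towers_needing_maintenance():
    # The real join (CELL_TOWER x TOWER_MAINTENANCE_RECORD) happens inside this service.
    resp = requests.get(f"{TOWER_SERVICE}/towers", params={"overdue_maintenance": "true"})
    resp.raise_for_status()
    return resp.json()

def phones_attached_to(tower_id):
    # Drill-down by reference: we already hold the tower's ID, no join required.
    resp = requests.get(f"{CONNECTION_SERVICE}/towers/{tower_id}/phones")
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for tower in towers_needing_maintenance():
        print(tower["id"], len(phones_attached_to(tower["id"])), "phones attached")
```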
UI Interaction:
Ideally, any given screen of UI interaction should interact with only one or two microservices. (Yes, I know we don’t have independent screens anymore, but you know what I mean.) For example, say you have one wrapper screen with everything about a PATIENT. Within it are tabs with different information about the patient: one with a list of VISITs and a drill-down on VISIT; another with BILLING_AND_PAYMENTs; etc. Almost certainly, those two tabs are accessing two different microservices, and the wrapper with basic patient information (name, date of birth, …) is possibly a third. However, this does not violate my rule of thumb, because you can think of the different tabs as different “screens,” given the somewhat expanded definition of “screen.” In any case, as a rule of thumb it’s pretty weak, as it gets violated frequently. Its value is that it might tip the balance when you are unsure which service should hold a particular table: consider what the UI is interacting with.
The latest trend in microservices is to create Microservice UI, where the UI components that interact with a microservice live in that service and are only included as components in the larger framework. If, for instance, another column is added to one table in one service, then the UI to present and to edit that data is guaranteed to be in the same service. This targeted UI is included as a component, perhaps in several places in the application, but none of those places need to change. I admit that I’ve not used this approach in a real application, yet, but I’ve experimented with it. My experience was that I still needed to make changes in the wrapper to account for changes in the size and geography of the component. However, my experiment was not developed with a reactive UI, and most of the issues I saw would not have been problems if it were.
Rule of Thumb: Try to break up your microservices such that any given “screen” of the UI is interacting with no more than two services. When you do find yourself violating this rule, do so knowingly and have a good reason.
Rule of Thumb: Any UI component should interact with only a single service. Consider putting that UI code in the same repo as the back end code for that service.
Don't Overdo It
The most common mistake I’ve seen (and made myself) is to go overboard in making lots of nanoservices. (A term I thought I had just made up, but a quick Google search shows that I’m not the first one.) Just because this one table of PATIENT_COMMENTS is only accessed on a screen dedicated to CRUD of those comments, and there are no real joins to it because it is just a drill-down from a Patient page, doesn’t mean it needs its own microservice. Creating a microservice adds some overhead in mental load for understanding the overall application, which somewhat offsets the mental load saved by isolating a section cleanly. The net result should be to decrease the mental load, not increase it.
Test Driven Development
Written By: Jeremy Miller
In his DevOps Methodology Explained blog, Daniel Kulvicki introduced the notion of Continuous Testing and Test Automation as part of the overall DevOps methodology. This week, I am going to re-use one of my earlier blogs to kick off a deeper dive into testing, with a focus on Test Driven Development (TDD) and Behavior Driven Development (BDD).
Test Driven Development and Behavior Driven Development as software techniques have both been around for years, but confusion still abounds in the software industry. In the case of TDD, there’s also been widespread backlash from the very beginning. In this new series of blog posts I want to dive into what both TDD and BDD are, how they’re different (and you may say they aren’t), how we use these techniques on Calavista projects, and some thoughts about making their usage more successful. Along the way, I’ll also talk about some other complementary “double D’s” in software development like Domain Driven Development (DDD) and Responsibility Driven Development.
Test Driven Development
Test Driven Development (TDD) is a development practice where developers author code by first describing the intended functionality in small, automated tests, then writing the necessary code to make that test pass. TDD came out of the Extreme Programming (XP) process and movement in the late 90’s and early 00’s that sought to maximize rapid feedback mechanisms in the software development process.
As I hinted at in the introduction, the usage and effectiveness of Test Driven Development is extremely controversial. With just a bit of googling you’ll find both passionate advocates and equally passionate detractors. While I will not dispute that some folks will have had negative experiences or impressions of TDD, I still recommend using TDD. Moreover, we use TDD as a standard practice on our Calavista client engagements and I do as well in my personal open source development work.
As many folks have noted over the years, the word “Test” might be an unfortunate term, because TDD at heart is a software design technique (BDD was partly an attempt to adjust the terminology of the earlier TDD and refocus on the underlying goals by moving away from the word “Test”). I would urge you to approach TDD as a way to write better code and also as a way to continue to make your code better over time through refactoring (as I’ll discuss below).
Succeeding in software development is often a matter of having effective feedback mechanisms to let the team know what is and is not working. When used effectively, TDD can be very beneficial inside a team’s larger software process, first and foremost as a very rapid feedback cycle. Using TDD, developers continuously flow between testing and coding and get constant feedback about how their code is behaving as they work. It’s always valuable to start any task with the end in mind, and a TDD workflow makes a developer think about what successful completion of any coding task looks like before they implement that code.
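To make that workflow concrete, here is a tiny, hypothetical TDD step using pytest (the discount rule, the total_price function, and the numbers are invented, not from any real project): the tests are written first to describe the intended behavior, and only then is the smallest implementation written to make them pass.

```python
# A tiny, hypothetical TDD cycle with pytest.
import pytest

# Step 1 (red): write the tests first, describing the intended behavior.
def test_small_orders_have_no_discount():
    assert total_price([10.0, 15.0]) == 25.0

def test_orders_of_five_or_more_items_get_ten_percent_off():
    assert total_price([10.0] * 5) == pytest.approx(45.0)

# Step 2 (green): write just enough code to make both tests pass.
def total_price(prices):
    subtotal = sum(prices)
    return subtotal * 0.9 if len(prices) >= 5 else subtotal

# Step 3 (refactor): clean up the code while the tests stay green, then repeat.
```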
Done well with adequately fine-grained tests, TDD can drastically reduce the amount of time developers have to spend debugging code. So yes, it can be time-consuming to write all those unit tests, but spending a lot of time hunting around in a debugger trying to troubleshoot code defects is pretty time-consuming as well. In my experience, I’ve been better off writing unit tests against individual bits of a complex feature first before trying to troubleshoot problems in the entire subsystem.
Secondly, TDD is not efficient or effective without the type of code modularity that is also frequently helpful for code maintainability in general. Because of that, TDD is a forcing function to make developers focus and think through the modularity of their code upfront. Code that is modular provides developers more opportunities to constantly shift between writing focused unit tests and the code necessary to make those new tests pass. Code that isn’t modular will be very evident to a developer because it causes significant friction in their TDD workflow. At a bare minimum, adopting TDD should at least spur developers to closely consider decoupling business logic, rules, and workflow from infrastructural concerns like databases or web servers that are intrinsically harder to work with in automated unit tests. More on this in a later post on Domain Driven Development.
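As a rough sketch of the kind of decoupling I mean (the Invoice type, the repository abstraction, and the 30-day rule are all hypothetical), the business rule below depends only on a small interface, so it can be unit tested with an in-memory stand-in instead of a real database:

```python
# A minimal sketch of decoupling a business rule from infrastructure.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List, Protocol

@dataclass
class Invoice:
    id: int
    due_date: date
    paid: bool = False

class InvoiceRepository(Protocol):
    def unpaid_invoices(self) -> List[Invoice]: ...

def overdue_invoices(repo: InvoiceRepository, today: date) -> List[Invoice]:
    """Business rule: an unpaid invoice more than 30 days past due is overdue."""
    return [inv for inv in repo.unpaid_invoices()
            if today - inv.due_date > timedelta(days=30)]

# In a unit test, a plain in-memory object stands in for the database-backed repo:
class InMemoryInvoiceRepo:
    def __init__(self, invoices: List[Invoice]):
        self._invoices = invoices
    def unpaid_invoices(self) -> List[Invoice]:
        return [inv for inv in self._invoices if not inv.paid]

def test_invoice_is_overdue_after_thirty_days():
    repo = InMemoryInvoiceRepo([Invoice(1, due_date=date(2021, 1, 1))])
    assert overdue_invoices(repo, today=date(2021, 3, 1)) == [
        Invoice(1, due_date=date(2021, 1, 1))
    ]
```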
Lastly, when combined with the process of refactoring, TDD allows developers to incrementally evolve their code and learn as they go by creating a safety net of quickly running tests that preserve the intended functionality. This is important, because it’s just not always obvious upfront what the best way is to code a feature. Even if you really could code a feature with a perfect structure the first time through, there’s inevitably going to be some kind of requirements change or performance need that sooner or later will force you to change the structure of that “perfect” code.
Even if you do know the “perfect” way to structure the code, maybe you decide to use a simpler, but less performant way to code a feature in order to deliver that all important Minimum Viable Product (MVP) release. In the longer term, you may need to change your system’s original, simple internals to increase the performance and scalability. Having used TDD upfront, you might be able to do that optimization work with much less risk of introducing regression defects when backed up by the kind of fine-grained automated test coverage that TDD leaves behind. Moreover, the emphasis that TDD forces you to have on code modularity may also be beneficial in code optimization by allowing you to focus on discrete parts of the code.
Too much, or the wrong sort of modularity can of course be a complete disaster for performance, so don’t think that I’m trying to say that modularity is any kind of silver bullet.
As a design technique, TDD is mostly focused on fine grained details of the code and is complementary to other software design tools or techniques. By no means would TDD ever be the only software design technique or tool you’d use on a non-trivial software project. I’ve written a great deal about designing with and for testability over the years myself, but if you’re interested in learning more about strategies for designing testable code, I highly recommend Jim Shore’s "Testing Without Mocks" paper for a good start.
To clear up a common misconception, TDD is a continuous workflow, meaning that developers would be constantly switching between writing a single or just a few tests and writing the “real” code. TDD does not — or at least should not — mean that you have to specify all possible tests first, then write all the code. Combined with refactoring, TDD should help developers learn about and think through the code as they’re writing code.
So now let’s talk about the problems with TDD and the barriers that keep many developers and development teams from adopting or succeeding with TDD:
1. There can be a steep learning curve. Unit testing tools aren’t particularly hard to learn, but developers must be very mindful about how their code is going to be structured and organized to really make TDD work.
2. TDD requires a fair amount of discipline in your moment-to-moment approach, and it’s very easy to lose that under schedule pressure — and developers are pretty much always under some sort of schedule pressure.
3. The requirement for modularity in code can be problematic for some otherwise effective developers who aren’t used to coding in a series of discrete steps.
4. A common trap for development teams is writing the unit tests in such a way that the tests are tightly coupled to the implementation of the code. Unit testing that relies too heavily on mock objects is a common culprit behind this problem (see the sketch after this list). In this all-too-common case, you’ll hear developers complain that the tests break too easily when they try to change the code. In that case, the tests are possibly doing more harm than good. The follow-up post on BDD will try to address this issue.
5. Some development technologies or languages aren’t conducive to a TDD workflow. I purposely choose programming tools, libraries, and techniques with TDD usage in mind, but we rarely have complete control over our development environment.
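To illustrate point 4 with an invented cart/pricing example, compare a test that pins down how the code calls its collaborator with one that only checks the observable outcome; the first breaks on harmless refactorings, the second does not:

```python
# A hypothetical illustration of the "tests coupled to the implementation" trap.
from unittest.mock import Mock

def apply_discount(cart: dict, pricing_service) -> dict:
    cart["total"] = pricing_service.discounted_total(cart["items"])
    return cart

def test_coupled_to_implementation():
    pricing = Mock()
    pricing.discounted_total.return_value = 90
    apply_discount({"items": [100]}, pricing)
    # This assertion pins down *how* apply_discount works; rename the method,
    # batch the call, or add caching and the test breaks even if behavior is fine.
    pricing.discounted_total.assert_called_once_with([100])

def test_checks_behavior_instead():
    class FakePricing:                      # simple hand-rolled stub
        def discounted_total(self, items):
            return sum(items) * 0.9
    cart = apply_discount({"items": [100]}, FakePricing())
    assert cart["total"] == 90              # only the observable outcome matters
```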
You might ask, what about test coverage metrics? I’m personally not that concerned about test coverage numbers, don’t have any magic number you need to hit, and I think the right number is very subjective anyway, based on what kind of technology or code you’re writing. My main thought is that test coverage metrics are only somewhat informative: they can tell you when you may have problems, but they can never tell you that the actual test coverage is effective in any way. That being said, it’s relatively easy with current development tooling to collect and publish test coverage metrics in your Continuous Integration builds, so there’s no reason not to track code coverage. In the end, I think it’s more important for the development team to internalize the discipline to have effective test coverage on each and every push to source control than it is to have some kind of automated watchdog yelling at them. Lastly, as with all metrics, test coverage numbers are useless if the development team is knowingly gaming them with worthless tests.
Does TDD have to be practiced in its pure “test first” form? Is it really any better than just writing the tests later? I wouldn’t say that you absolutely have to always do pure TDD. I frequently rough in code first, then when I have a clear idea of what I’m going to do, write the tests immediately after. The issue with a “test after” approach is that the test coverage is rarely as good as you’d get from a test-first approach, and you don’t get as much of the design benefits of TDD. Without some thought about how code is going to be tested upfront, my experience over the years is that you’ll often see much less modularity and worse code structure. For teams new to TDD I’d advise trying to work “pure” test first for a while, and then start to relax that standard later.
At the end of this, do I still believe in TDD after years of using it and years of development community backlash? I do, yes. My experience has been that code written in a TDD style is generally better structured and the codebase is more likely to be maintainable over time. I’ve also used TDD long enough to be well past the admittedly rough learning curve.
My personal approach has changed quite a bit over the years of course, with the biggest change being much more reliance on intermediate level integration tests and deemphasizing mock or stub objects, but that’s a longer conversation.
In my next post, I’ll finally talk about Behavior Driven Development, how it’s an evolution and I think a complement to TDD, and how we’ve been able to use BDD successfully at Calavista.
I Have a "Killer Idea" And Now I Need Software...
Written By: Andrew Fruhling, Chief Operating Officer at Calavista
Every day, a million and one thoughts fly around in our heads. Sometimes, they're killer ideas — the ones we think will make us never have to work again for the rest of our lives. Other times, they are just products of a very overactive imagination.
However, we know that it is one thing to dream and another to transform that idea into an actual business. Often, a vital part of this transformation is creating a Minimum Viable Product (MVP). This is a term Eric Ries made popular with his book, "The Lean Startup." He defines an MVP as “that version of a new product a team uses to collect the maximum amount of validated learning about customers with the least effort.”
In simple terms, a Minimum Viable Product is a product with just enough features to attract customers to validate the product idea as soon as possible. MVPs primarily help companies determine how well their product resonates with the target market before committing a larger budget to it. If you have a potentially killer idea, you should start by creating a Minimum Viable Product to validate the idea.
Approaches to Building a Minimum Viable Product
There are several approaches to building a minimum viable product and this is where things often get challenging – especially for people who do not have much software development experience. I have been in the software development industry for many years, and I have built teams using many different approaches. Some work better than others. People often ask, “How do I get my initial software built?” I thought it would be good to put my thoughts down in writing.
Let’s start with a disclaimer. I have been a customer of Calavista’s twice before joining the company in 2020. As you would expect, I am a fan of the model used by Calavista, and I have seen it work well for many customer projects. With that said, I hope the following analysis provides a good overview of why I think this model works well.
For this analysis, I want to make some assumptions. The scope of a minimum viable product is usually constant – hence the term, minimum viable product. That leaves you with three primary levers to consider: Time, Quality, and Cost, plus, of course, the overall Risk of the approach. Below, we explore the various approaches to building an MVP and what they would mean for your start-up. Each approach will be scored on a scale of 1 to 5 across Cost, Time, Quality, and Risk (where 5 is the best possible), and an Overall score will be assigned based on the average of the Cost, Time, Quality, and Risk scores.
1. Build it Yourself (Overall score: 2.5/5.0)
There are a lot of low-code and no-code approaches to building simple MVP applications. These can work in some cases until your business complexities require a more sophisticated system, but expect a significant rewrite at some point. I have personally recommended this type of approach to companies when cost is truly the most important criterion. This allows you to minimize your investment while you validate the idea.
a. Cost (5): This has the lowest possible cost as you are building it yourself. Many people take this route if they cannot afford to hire other people to do the work.
b. Time (1): There is often a significant time span and learning curve required to experiment with the capabilities of the selected tools. You are often trading cost for time.
c. Quality (2): This element depends on you. If you're an expert developer, you would probably have an MVP with decent quality. However, for most people, this is like watching a DIY home improvement program where the homeowner does not even realize the impact on the quality until a professional shows them what should have been considered.
d. Risk (2): There is quite a lot of risk to this approach, as you often sacrifice quality and time because you want to cut costs.
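For reference, the Overall score here is simply the average of the four sub-scores: (5 + 1 + 2 + 2) / 4 = 2.5.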
2. Build your own Software Development Team (Overall score 2.0/5.0)
You could also decide to hire developers and build a team to work on your MVP. It is common for a startup to want to build its own team; after all, if you think you have a killer idea involving software, you naturally want your own team to build it. Just like building a house, there are many skillsets required to build a software product – even an MVP. Building an initial MVP requires more than just a couple of developers. You will need expertise around architecture, requirements, testing, development tools & processes, and much more.
a. Cost (1): This has very high upfront costs. You typically need to hire multiple people with different skills to build a quality MVP. In today’s job market, these resources can be expensive and hard to find.
b. Time (1): It takes a lot longer than most people think to find good resources, identify the development tools and processes, and build the actual MVP.
c. Quality (4): You will likely get a quality MVP if you build the right team and processes first.
d. Risk (2): Your MVP’s success rests on your ability to hire and retain a team with the appropriate skill set, knowledge, and attitudes.
3. Outsource with Contract Coders (Overall score 1.8/5.0)
I see this often and have noticed that while it seems like an attractive approach, it usually does not end well. You can find coders online and pay them for the scope of your project. This may work well for small, straightforward projects that have a clear start and finish. Often, it grows to be multiple independent contractors who do not work together very well.
a. Cost (3): This is relatively low as you don't have to pay the overhead costs for hiring or management plus you can scale up/down as needed.
b. Time (2): It is usually swift to start, but you could soon begin to face issues with developers understanding the project’s scope, defining a holistic architecture, and delivering on time.
c. Quality (1): Even with good developers, the overall quality is often inferior due to interdependencies. In addition, a lack of focus on the overall architecture usually impacts quality.
d. Risk (1): This can be very risky, as it might cost you time and quality in the long run as you end up with a disjointed product. Many times this leads to significant rewrites to improve performance, stability, maintainability, and user experience.
4. Outsource with Offshore Team (Overall score 2.8/5.0)
Many people talk about how you can outsource your development to companies in other countries such as India, China, Mexico, Belarus, etc. There are many cost-effective options, but there are also many challenges.
a. Cost (5): Offshore development companies in India and China can provide development capacity at a very low price. On the other hand, locations like Eastern Europe and Central/South America are typically pricier but may offer better overall results.
b. Time (3): Depending on the processes of the company you outsourced your development to, the timing might be shorter. Often offshore companies can scale up a team much faster than you can hire, and the team will have recommendations for tools and processes. On the other hand, some offshore companies tend to have a “never-ending project” syndrome where the final deliverable seems to slip repeatedly.
c. Quality (2): This is where challenges often occur. Quality implies the product works as designed and as intended for the MVP. It is difficult to ensure quality remotely, especially if you are not familiar with quality development practices.
d. Risk (1): Selecting the right offshore contracting company is difficult and could be costly if you make the wrong choice. There are hundreds (possibly thousands) of options. They all claim to have industry leading best practices and the experience to make you successful. There is often miscommunication around what the MVP needs and its delivery. Between language, culture, and time zone differences, miscommunication is common, especially when you do not have experience working with offshore teams.
5. Outsource with Managed Offshore Team (Overall score 4.3/5.0)
A Managed Offshore Team means you have senior leadership in the US who provide the technical expertise for the project. They typically also offer the practical knowledge to best leverage offshore resources. The onshore management team will include the following:
- A senior development leader who would be like a VP of Development for most companies
- A senior architect who provides the technical expertise across the project
Based on the size of the project, these may be fractional resources. This means you’re getting senior US-based leadership combined with offshore development costs. If done well, it can deliver the best of both worlds.
NOTE: Many offshore teams will have a US-based resource assigned as part of your project. In my experience, these are typically not the same as the “Managed Offshore” resources described here. Typically, offshore teams assign a US-based account management role to the project rather than an industry veteran with more than 20 years of experience running projects.
a. Cost (3): While not the lowest cost option, you can save a considerable amount on staffing by having a blended team.
b. Time (5): With this approach, you essentially hire a VP of Development who brings a development team ready for your project.
c. Quality (4): The quality usually depends on the repeatable and automated processes you have established. With a great process and collaboration, you often get the best quality.
d. Risk (5): A strong seasoned leadership team with repeatable and often automated best practices that leverages strong offshore development teams with cost-effective rates can significantly reduce your project risks.
Final Thoughts
While there are several approaches to creating an MVP, you must carefully choose the one that best suits you. There is not a single best answer for all cases, and you will need to determine which is best for you. The scores for each approach are summarized below:
- Build it Yourself: Cost 5, Time 1, Quality 2, Risk 2 (Overall 2.5)
- Build your own Software Development Team: Cost 1, Time 1, Quality 4, Risk 2 (Overall 2.0)
- Outsource with Contract Coders: Cost 3, Time 2, Quality 1, Risk 1 (Overall 1.8)
- Outsource with Offshore Team: Cost 5, Time 3, Quality 2, Risk 1 (Overall 2.8)
- Outsource with Managed Offshore Team: Cost 3, Time 5, Quality 4, Risk 5 (Overall 4.3)
If cost is your primary driver, you may want to consider the ‘Build It Yourself’ option or a good ‘Outsource with Offshore Team’ option. However, if you want to mitigate your risk, I recommend the ‘Outsource with Managed Offshore Team’ model that provides the best of both worlds.
At Calavista, we have been providing complete, managed teams that are specifically configured to address a customer’s needs and follow the ‘Outsource with Managed Offshore Team’ model described above. Every engagement is led by a Solutions Director (SD) – professionals who have 20+ years of development experience and specific, demonstrated expertise in managing distributed teams. We use a hybrid, Hyper-Agile® development methodology that we’ve refined over the last two decades. These factors enable us to deliver projects with a greater than 94% success rate – 3x the industry average. If you would like to talk about how this could work for you, please let us know!
DevOps Methodology Explained: Why is DevOps Right For Your Organization?
Written By: Daniel Kulvicki, Solutions Director at Calavista
In the last decade, we have seen significant shifts in software development operations. One of these shifts is the evolution of DevOps, which came into play around 2008-2009. Even as organizations continue to adopt the practice, DevOps is still often treated as an extra when it needs to be a fundamental. In this article, we are going to explore DevOps and why every organization should adopt it.
- What is DevOps?
- How does DevOps work?
- Calavista Tenets of DevOps
What is DevOps?
For years, developers and most system operations teams worked separately. If you have ever worked in a software development company, you know that these departments don’t always agree. And this can lead to serious disagreements that delay development and hurt an organization’s productivity.
DevOps is a term used to describe the collaborative effort between developers and the IT operations team to improve the application development and release process. DevOps is derived from ‘Developers’ and ‘Operations.’ It involves agile development methodology and Enterprise Service Management (ESM).
Agile development is a software development approach that focuses on collaboration between cross-functional teams to enable rapid releases. This approach aims to help development teams keep up with the fast-evolving market dynamics in software development. Like DevOps, agile development incorporates changes continuously to produce a usable version of the application faster without compromising the quality of the output.
DevOps uses agile strategies, including collaboration and automation. However, it also includes the operations team who manage the application in production.
ESM applies IT system management such as system monitoring, automation, and configuration to improve performance, efficiency, and service delivery. This is the practice that brings the operations team to DevOps.
In the traditional software development process, developers would produce the code and hand it over to the operations team. The IT operators would generate or build the application from the code, then proceed to testing and production. In case of errors, or when a client requested changes, the application would go back to the developers, and the cycle would go on.
DevOps changed all this through various practices, including Continuous Integration and Continuous Delivery (CI/CD). Continuous Integration allows developers to submit their code into a shared repository several times a day and throughout development. It is a cost-effective way to identify bugs via automation tools. This increases efficiency since the developers will fix the bugs at the earliest chance. However, it is essential to note that CI does not get rid of the bugs. Instead, it is a principle that aids developers in identifying bugs so they can fix them in a timely manner.
Continuous Delivery makes app or software deployment a lot more predictable. It ensures that the code remains in a deployable state even as developers work to introduce new features or make configuration changes. As with CI, Continuous Delivery enhances frequent app releases with limited instability and security issues. Thus, Continuous Delivery increases not only efficiency but also the quality of the application/software.
CI/CD go hand in hand. They allow developers to make code changes frequently and release them to the operations and quality assurance teams. Together, Continuous Integration and Continuous Delivery increase the rate of production and the quality of applications.
In DevOps, the team works together from development to deployment. This enables organizations to work on multiple projects simultaneously to produce high-quality applications and software.
How does DevOps work?
Developers and system operators differ on a lot of things. But, at the same time, the two teams must work together to successfully deliver software development changes to the end-user (developers write code, and operations gets the code to the end-user). Remember, customer satisfaction is the backbone of any software development organization and this is where DevOps comes in. While it's not easy, bringing both teams together will ease the development and rollout processes.
Let me explain how DevOps works with an example.
Consider a typical software development team working on an application. The team includes software developers, software testers, DevOps engineer(s), and some other roles like scrum masters and business analysts. The software developers write new code and make changes to existing code. The software testers ensure the code works as designed – often, this will include automated tests. The DevOps engineers make sure everything comes together and automate as much as possible. The DevOps engineers typically stand up the development tools and environments, implement a process to “build” the code that typically includes the automated testing, and provide metrics on the development process and the application. The primary goal is to efficiently produce an app that is stable and secure, and the DevOps engineers are a critical part of this team.
Software development can be challenging, especially because clients demand changes all the time. However, DevOps teams implement these changes faster through constant collaboration, communication, and automation.
At Calavista, we like to break DevOps into 6 different tenets. This helps identify various areas of focus for our clients.
Calavista Tenets of DevOps
If your company is about to adopt DevOps or simply looking for ways to improve your current development processes, you will need a solid strategy. It will involve bringing cross-functional teams together, which also means changing the work culture. So, how do you go about it? This section highlights the fundamental principles of DevOps.
When we talk about DevOps, we like to break it down into 6 key areas crucial to the success of your company’s development and deployment processes.
- Collaboration
- Automation
- Continuous Integration
- Continuous Testing / Test Automation
- Continuous Delivery / Continuous Deployment
- Continuous Monitoring
As with many other changes, adopting DevOps will require you to develop a repeatable set of steps for the team. What goes first, and what comes second? Your DevOps team needs chronological steps so they can work together.
For instance, in a typical situation, a simple DevOps pipeline would look like this:
Step 1: Developers write the code
Step 2: Both engineers and IT operators compile the code and check for errors
Step 3: The operations team enables testing and quality assurance to validate and verify the code
Step 4: Deployment - The code is moved to a staging and/or production environment
However, different organizations will have different DevOps pipelines. Therefore, it is essential to define these functions for the DevOps methodology to succeed. This must also include a brief description of the automation tools you will use to develop and deploy an application.
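As a rough, purely illustrative sketch of what automating the four steps above can look like (in practice this is usually defined in a dedicated CI tool rather than a hand-rolled script), the Python below assumes a hypothetical project with a src folder, a pytest test suite, and a deploy script at scripts/deploy.sh:

```python
# A minimal, hypothetical pipeline runner; the project layout is assumed.
import subprocess
import sys

def run_step(name: str, command: list) -> None:
    """Run one pipeline step; stop the whole pipeline if it fails."""
    print(f"=== {name} ===")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"{name} failed; stopping the pipeline.")
        sys.exit(result.returncode)

if __name__ == "__main__":
    run_step("Build / compile check", [sys.executable, "-m", "compileall", "src"])
    run_step("Automated tests",       [sys.executable, "-m", "pytest"])
    run_step("Deploy to staging",     ["bash", "scripts/deploy.sh", "staging"])
```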
Defining your DevOps pipeline allows smooth collaboration between the teams through the production cycle.
Collaboration
DevOps is built on the principles of collaboration between developers, testers, and system operators. The relationship between these teams will determine your production efficiency. Furthermore, the collaboration goes beyond this core team to your business stakeholders and others, ensuring that the team, following an agile methodology, is building what the end users actually need. Helping everyone work together effectively will help you deliver great products.
This means that you might have to change your company culture. DevOps will not work for your organization if your developers and IT team don’t work collaboratively. Remember that it is a strategy that involves constant changes that must be validated in real-time.
DevOps is not only a developers-IT team affair. The stakeholders and management must also join the team so that everyone is on the same page. It is beneficial to both the organization and the team at large.
Good collaboration across the development and operations team and the broader organization is crucial to delivering outstanding software products.
Automation
Successful DevOps implementation relies heavily on automation. Therefore, it is essential to use the right test automation frameworks and tools to automate large sections of the development and deployment pipeline. Automation is a combination of tools and scripting to implement a cohesive and efficient development environment, and it can include any part of the development, testing, and operations activities, including onboarding.
So, what is automation in DevOps? It is the use of advanced tools to perform tasks that require minimal human intervention. However, automation does not get rid of the human functions in DevOps. Instead, it enhances the entire DevOps pipeline to allow quick releases. We will outline the benefits of automation in DevOps. But, first, let’s define the DevOps processes that can be automated.
DevOps Processes to Automate
Ideally, you can automate all the processes of DevOps, but you usually do not have the time to automate everything. Your automation infrastructure will vary from that of another company based on your specific requirements. When thinking about automation, we recommend prioritizing the following processes:
- CI/CD
- Testing
- Continuous Monitoring
DevOps automation begins with code generation. Next, multiple developers will submit their code into the source code repository, requiring Continuous Integration (CI). At this stage, your automation tool should detect bugs and any inconsistencies in the code. This makes it easy for developers to fix the bugs as soon as the system identifies them. Automation also enhances Continuous Delivery (CD) by allowing frequent configuration of new features and other changes to the code. As a result, it is easier to keep the code in a deployable condition at all stages.
Accurate testing is crucial to software development. Automation tools run tests automatically and as frequently as the developers check code into the repository. With automation, code testing runs throughout the software development cycle. Therefore, a company that uses automation is less likely to release unstable or buggy applications.
We will look at these processes in the following sections of this article. All in all, automation is one of the crucial principles of DevOps. Below are some of the benefits of automation in DevOps:
- Improved reliability of the app
- Greater accuracy during code generation
- Enhanced collaboration between the teams
- Reduced cost of production, since fewer staff are needed for development, testing, and deployment
- Better overall quality of the app
Continuous Integration
Continuous Integration enables developers to submit and merge their individually written/modified code into the shared main code branch. For instance, once the product road map is laid out, code generation starts. In most cases, developers will begin with the most critical parts of the source code. Therefore, Continuous Integration requires that individual developers submit and merge their code into a shared repository several times a day.
These changes go through automated testing to identify bugs. Developers will fix any detected bugs as soon as possible to keep the code ready for the main codebase. This allows for smooth workflow and consistency through regular adjustments to the code to meet the set validation criteria. Continuous Integration is also an essential step towards Continuous Delivery, a process that we shall focus on later.
Why is CI so significant in DevOps?
DevOps is based on a set of strategies that fuel software development processes for speedy and incremental release. Continuous Integration allows multiple developers to write and merge their code into a shared codebase. This builds transparency and accuracy throughout the development lifecycle. It ensures that everyone is on the same page during the code generation stage of software development. It also promotes collaboration between the involved departments in an organization.
Through automated testing, Continuous Integration enhances the quality of the end product. Errors and bugs are identified and fixed early, before the code is checked into the main codebase.
Continuous Integration starts with building the most critical code to lay out the product roadmap. This is followed by automated tests on the code in the shared repository before merging the code into the main codebase. Everyone on the team must update their code multiple times a day. Remember that CI is all about keeping the code in a deployable state at all times. Upon testing, developers should focus on fixing any bugs in the code as soon as they are detected.
Continuous Integration takes us to the next critical principle of DevOps: Continuous Testing / Test Automation.
Continuous Testing / Test Automation
Earlier, we talked about how critical automation is as a component of DevOps. Automation starts immediately after the developers start writing the code and runs throughout the development lifecycle. Continuous Testing goes side by side with Continuous Integration. It involves continuously reviewing the minor changes integrated into the codebase to enhance the quality of the product.
Continuous Testing is an excellent way to attain a frequent feedback loop on business risks at all stages of development. First, developers merge the code, and the quality assurance team takes over through test automation. Unlike traditional software development methodologies, the QA team doesn’t wait for developers to write all their code. For many companies, test cases are actually written before the code, in what is called Test Driven Development (TDD), and the test cases will simply fail until the code is written. In all cases, testing needs to happen as soon as the code gets to the shared repository. At this stage, bugs and errors are detected, and developers can fix them immediately.
Continuous Testing puts customer satisfaction in the mind of the developers at the idea stage of development. The quality assurance team uses Test Automation to check for software stability, performance, and security threats when developers integrate changes into the standard repository. Thus, Continuous Testing enhances the quality of the application and speeds up the development process.
An organization will need to develop a Test Automation strategy when laying down a software development roadmap. It can include unit testing, UI testing, load testing, API Integration testing, among others. Test Automation plans vary from one organization to another depending on the DevOps pipeline and selected metrics.
Continuous Testing and Test Automation bridge the gap between Continuous Integration (CI) and Continuous Delivery (CD). Upon testing and fixing detected bugs, the operations team proceeds to Continuous Delivery, a process that ensures that the code remains in a deployable state throughout the development lifecycle.
Continuous Delivery / Continuous Deployment
Continuous Delivery is the logical next step after Continuous Integration and Continuous Testing and is an integral part of almost every Calavista project. Automating the delivery mechanism massively reduces delivery issues, delivery effort, and delivery time. However, please note that even though the delivery is automated, manual release mechanisms may still be in place for moving a release from one environment to another – especially customer acceptance and production environments. Continuous Deployment automates even these delivery steps. This process goes hand in hand with Continuous Integration: once tested, minor changes are integrated into the central code repository in a deployable state throughout the development lifecycle.
In other terms, Continuous Delivery (CD) is a combination of the processes we have discussed, i.e., building the code, testing it, and merging it into the main codebase in short cycles. In this step, the software is only a push-button away from deployment (release to the end-user). At this phase, the team reviews and verifies/validates the code changes to ensure that the application is ready for release. When the criteria are met, the code is pushed to a production environment.
The changes incorporated in the CI/CD phase are released to the customer in the Continuous Deployment phase. The code changes go through Test Automation to check for quality and stability before they are released to the production environment. Continuous Deployment focuses on customer or end-user satisfaction. For instance, if a user makes a bug report, developers can make changes to the code. The changes will be automatically deployed upon passing the automated tests; the deployment will fail if the newly written code does not pass them. Therefore, Continuous Deployment reduces the risk introduced by newly applied code and thereby maintains the software’s stability. Continuous Deployment is the process that enables developers to add new features or updates to a live application.
In short, Continuous Deployment uses Test Automation to check the stability of newly integrated code changes before releasing the software to the client or end-user.
Continuous Monitoring
Continuous Monitoring (CM) is usually the last phase of the DevOps pipeline. However, it is just as important as any other phase. The continuous model of operations means that code changes occur rapidly. As a result, Continuous Monitoring gives the DevOps team proper insight into what their system is doing and how it is operating.
So how does CM work?
DevOps engineers are required to help other teams support and maintain their application. Continuous Monitoring is put into place to enable support to be proactive instead of reactive. Metrics and KPIs are introduced for Continuous Monitoring to enable more visibility and insight into how the production code is both developed and running. In addition to metrics, centralized logging is usually put into place to expedite the diagnosis of issues. These tools bring together a way to monitor all aspects of an application in support of creating a better product.
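As a small, hypothetical sketch of those two ideas, structured logs that a centralized logging system could ingest and a simple metric a monitoring agent could report, using only Python’s standard library (the service name and the metric are invented):

```python
# A minimal sketch of structured logging plus a simple counter metric.
# "order-service" and orders_processed are hypothetical names.
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": time.time(),
            "level": record.levelname,
            "service": "order-service",
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("order-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

orders_processed = 0  # a metric an agent or /metrics endpoint could report

def process_order(order_id: str) -> None:
    global orders_processed
    orders_processed += 1
    logger.info(f"processed order {order_id} (orders_processed={orders_processed})")

process_order("A-1001")  # emits one JSON log line
```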
Continuous Monitoring reduces app downtime, because the team is continuously aware of the app’s performance as well as potential threats. Besides, bugs are detected and fixed in real time, thereby enhancing customer satisfaction.
The primary goal of Continuous Monitoring is to ensure maximum performance of the app. This can only be achieved through responding to customer feedback, monitoring user behavior, and deploying changes as often as needed.
Conclusion
I hope this blog has helped you gain a better understanding of DevOps and how we break it out at Calavista. Hopefully, on your next project, you can reference our Tenets and see how you can better fit DevOps into your organization. Please feel free to reach out as we always love to talk DevOps!