5 Major Problems With Synchronous Code Reviews

If your team does not use git as a source control system, then you should continue reading, because you most likely use the synchronous code review type, and that brings 5 major problems with it.

But even if you do use git as a source control system, you might still use the synchronous code review type. If so, then you still have these 5 problems—even though you use git.

By synchronous review I mean a code review that is performed together in front of the coder’s screen, immediately after the coder has finished coding.

In contrast, the asynchronous review is done by the reviewer on his own screen, on his own schedule. The reviewer uses some tools to write comments, which are then forwarded to the coder to improve the code.

If you want to know more about the differences between the types of code reviews, I recently wrote a separate blog post about the 4 types of code reviews.

At the time of this writing my team uses Team Foundation Server (TFS) and its built-in source control TFVC (Team Foundation Version Control). Most of the time we also use synchronous code reviews, and I have found a couple of problems with that approach.

Ok, so what are the downsides of synchronous code reviews?

5 major problems with synchronous code reviews

Synchronous code reviews produce the following problems:

  1. Direct dependencies
  2. Forced context-switching
  3. Not done consistently
  4. Time pressure
  5. Lack of focus

Let’s have a look at each of these issues in more detail.

1) Direct dependencies

A big problem with the synchronous code review is that the reviewer has to review based on the coder’s schedule. As soon as the coder is finished, the reviewer is expected to stop his own task and start the review together with the coder.

In such a situation the coder depends on the reviewer and has to wait until the reviewer is available. And it is not worthwhile for the coder to start a new task, because the reviewer should be available soon.

There is a direct dependency between the reviewer and the coder.

And direct dependencies are not good, because the more direct dependencies you have in your process flow, the higher the chance of blockers.

When the coder is requesting a review the reviewer might be busy with something else, maybe with something more important.

He might tell the coder that he will be available for the review soon.

And the coder is waiting, and waiting, and waiting.

5 minutes, maybe 10 minutes later, they finally start with the review. During that time the coder was just waiting, doing nothing, except maybe checking his Facebook news feed.

In contrast, when using asynchronous code reviews the coder does not depend on the reviewer. When the coder is finished, he uses some tooling to create a review request and continues with his next task.

With asynchronous code reviews there is no direct dependency between the coder and reviewer and therefore no potential blocker.
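
For example, with a git-based workflow the hand-off can be as simple as pushing a branch and opening a pull request (a minimal sketch; the branch name and review tool are just examples, assuming something like TFS/VSTS or GitHub):

# Coder: publish the finished work for review
git checkout -b feature/invoice-export    # hypothetical feature branch
git add .
git commit -m "Add invoice export"
git push -u origin feature/invoice-export

# Then open a pull request in the review tool (e.g. the TFS web UI) and
# assign a reviewer. The coder immediately picks up the next task; the
# reviewer picks up the pull request whenever it fits his schedule.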

2) Forced context-switching

Imagine you are working on your own task and are in the middle of implementing a complex algorithm.

Then your teammate asks for a review because he has just finished his task.

This means that you have to stop working on your own task and review the code of your coworker.

Because of this you are dragged out of the context of your own work.

This has some negative side effects: context-switching is exhausting and frustrating, and it takes you a few moments to focus on the new context.

I have been in this situation as a reviewer myself a couple of times, and I don’t like it.

When the coder started to explain his code to me, I had to ask him to repeat the last one or two sentences. My brain was still processing the work I had been doing just before, so I wasn’t yet able to take in the new context.

After the review is finished you have to switch context again. It will again take you some time to get back into your own task and the context where you left off.

In contrast, when using asynchronous code reviews the reviewer can review the code when it fits his own schedule. He is not forced to switch context based on the coder’s schedule.

The reviewer can finish with his task first and only then start with the review, reducing the amount of required context-switches to a minimum.

3) Not done consistently

Imagine you are starting to work on a new task.

You start by investigating the existing code base to figure out which of the existing methods and classes can be reused.

Let’s assume that a lot of the required functionality is already there and you only have to put the pieces together by calling a couple of methods.

In the end you only had to change a few lines of code to achieve the goal of your task.

Then you wonder whether it is really necessary to ask your colleague for a review. The code changes are so simple that it seems almost impossible a review would lead to any improvements.

Therefore, you decide to skip the review and just go ahead and commit the changes. You save yourself and also your colleague some time.

This is a natural and understandable behaviour, but it is dangerous.

How do you decide when a code change is simple enough so it does not require a review?

It is very hard to define rules for that.

Review everything

So the best rule is to just review everything, no matter how small and simple the code changes are.

But even then you most likely face the problem that people are not going to follow this “review everything” rule.

It is just additional effort to get a reviewer to your desk for such a small code change. You have to ask your colleague whether he has time for a review, wait for him to roll over, explain what you were doing, etc.

The barrier for performing a review is actually quite high when using a synchronous code review approach.

In contrast, when the team uses an asynchronous code review approach, then it is quite easy to create a pull request and assign it to your colleague. So requesting a code review is way simpler when you use an asynchronous review approach.

Therefore, the likelihood that a code review actually happens is also higher when using an asynchronous code review compared to a synchronous one.

This argument is also implicitly supported by a survey of 550 developers conducted in 2017.

One finding of this survey was that developers who use a code review tool are four times more likely to review on a daily basis than those using a meeting-based approach.

This result is certainly not definitive proof of the claim that asynchronous reviews happen more often than synchronous ones. But I think you can safely assume that an asynchronous review type requires additional review tools to give feedback on the code, whereas a synchronous review needs hardly any tooling, although it might help.

Therefore, the likelihood that code reviews are done consistently is higher for the asynchronous approach compared to the synchronous one.

4) Time pressure

Let’s assume your team is using synchronous reviews and your coworker just finished his task and asked you to perform a review.

So you join him at his desk and he starts to explain the task and the complex problems he had to solve as part of it.

As he spent the last couple of hours analyzing the problem and finding a solution, he has a lot of insight and knowledge about the task. Therefore his explanations are very sharp and right to the point.

But it is quite difficult for you as a reviewer to follow. You didn’t spend the last couple of hours analyzing the problem. You didn’t put that much thought into finding a good solution.

Therefore it is very hard for you to decide within a few moments whether the presented solution is the best or not.

But your colleague wants to get his code approved as soon as possible.

This puts you, as the reviewer, under pressure. On the one hand you want to take the time to think the solution through, because you want to validate whether it is really a good solution. On the other hand you want to approve the solution so your colleague can move on.

In this situation you are not encouraged to question the solution too much. Even if you don’t understand exactly how the new code works and why it has been implemented that way, you are not going to ask too many questions.

Why? There are three reasons for this:

Trust in the coder

First of all, your coworker might explain his solution in a very self-confident manner. This builds trust, and you might think that he knows what he is doing. He was working on this for the last couple of hours and the solution seems to be solid and sound.

Trust is good in a team in general, but not for a code review.

Actually, the whole point of a code review is to distrust your colleague and double-check everything, to make sure the code really does what it is supposed to do.

Reluctance to ask questions

Second of all, you as a reviewer don’t want to ask too many questions, because it might look like you are not able to follow the explanations—or are just too slow to understand what is going on.

Therefore it is easier not to ask too many questions, because you don’t want to look like a fool.

Reluctance to review the same code lines multiple times

Third of all, you don’t want to look through the same piece of code multiple times while the coder is sitting next to you. Even if you are reviewing a complex algorithm that is hard to understand at first sight, you are not going to switch back and forth between different code files for several minutes while the coder waits next to you.

Because then it might appear that you are not able to grasp the solution, and that your mind is just too slow to understand the problem and figure out the solution.

So what happens is that you are not going to look too closely into validating the solution. You don’t have the time to think everything through, because of the pressure of having the coder sitting next to you.

Naturally, this results in worse code quality, because the review is performed under time pressure and you might miss some potential issues in the code.

In contrast, with an asynchronous review you work on your own schedule and are not under time pressure. You can take as long as you need to understand and review the solution. Therefore it is more likely that you find the bugs before you approve the code.

5) Lack of focus

Let’s say you are in the middle of coding, working on a new complex feature, and you are totally focused on implementing this new algorithm you have in your mind.

While you are highly concentrated, your teammate next to you finishes his task and asks you for a review of his code.

Of course, you are not happy about this.

But you cannot decline the request because your colleague is waiting. He is blocked until you review his code. Only when the code is approved and committed can he pick up the next task.

As you are a nice person you will roll over with your chair to your teammate and do the review.

But you are frustrated.

You want to get back to your own work as soon as possible.

And this feeling has an impact on the quality of the code review you perform. You want to get it done as fast as possible, so you can continue with your complex algorithm.

Therefore you don’t look too closely and just do a quick-and-dirty review without putting much effort into it.

This has the effect that you might miss some defects in the code that you normally would have spotted.

As a matter of fact, nobody likes to get interrupted while working on a complex task that requires the highest focus and attention.

However, when the team performs asynchronous code reviews you don’t have these problems.

You perform the review when it fits your own schedule, when you have the time and the full mental capacity. Then you can give your full attention to the code under review.

How to fix these problems?

Ok, so now I hope I have convinced you that synchronous code reviews are bad if they are used as the default code review type within a team.

There are of course some cases where the synchronous type works best, as I described here, but these cases should not be the norm.

As I mentioned at the beginning, at the time of this writing my team also uses the synchronous approach as the default review type. And we do face all of the above-mentioned problems.

What are we doing to solve these issues?

Well, we are in the process of moving our codebase to git. With git in place and with the tooling provided by TFS we will be able to use asynchronous code reviews.

I am really looking forward to that, but it will still take some time to prepare the move.

Ok, that’s it for today. Let me know in the comments section if you have any remarks. Take care and HabbediEhre!

4 Types Of Code Reviews Any Professional Developer Should Know About

Every professional software developer knows that a code review should be part of any serious development process. But what most developers don’t know is that there are many different types of code reviews. And each type has specific benefits and downsides, depending on your project and team structure.

In this post I am going to list the different types of code reviews and explain how each type works exactly. I am also going to give you some ideas on when to use which type.

Ok, let’s get started.

Here we go.

First of all, on a very high level you can classify code reviews into two different categories: formal code reviews and lightweight code reviews.

Formal code review

Formal code reviews are based on a formal process. The most popular implementation here is the Fagan inspection.

It is a very structured process for finding defects in code, but it can also be used to find defects in specifications or designs.

The Fagan inspection consists of six steps (Planning, Overview, Preparation, Inspection Meeting, Rework and Follow-up). The basic idea is to define output requirements for each step upfront. When running through the process, you inspect the output of each step and compare it to the desired outcome. Then you decide whether to move on to the next step or whether there is still work to do in the current one.

Such a structured approach is not used a lot.

Actually, in my career I have never come across a team that used such a process, and I don’t think I ever will.

I think that is because of the big overhead this process brings with it; therefore not a lot of teams make use of it.

However, if you have to develop software that could cause the loss of life in case of a defect, then such a structured approach for finding defects makes sense.

For example if you develop software for nuclear power plants then you probably want to have a very formal approach to guarantee that there is no bug in the delivered code.

But as I said, most of us developers are working on software that is not life-threatening in case of a bug.

And therefore we use a more lightweight approach for code reviews instead of the formal approach.

So let’s have a look at the lightweight code reviews:

Lightweight code reviews

Lightweight code reviews are commonly used by development teams these days.

You can divide lightweight code reviews into the following subcategories:

  1. Instant code review—also known as pair programming
  2. Synchronous code review—also known as over-the-shoulder code review
  3. Asynchronous code review—also known as tool-assisted code review
  4. Code review once in a while—also known as meeting-based code review

Type 1: Instant code review

The first type is the instant code review, which happens during pair programming. While one developer is hitting the keyboard to produce the code, the other developer reviews the code right on the spot, paying attention to potential issues and giving ideas for code improvement on the go.

Complex business problem

This type of code review works well when you have to solve a complex problem. By putting two heads together to go through the process of finding a solution you increase the chance to get it right.

With two brains thinking about the problem and discussing possible scenarios, it is more likely that you also cover the edge cases of the problem.

I like to use pair programming when working on a task which requires a lot of complex business logic. Then it is helpful to have two people think through all the different possibilities of the flow and make sure all are handled properly in the code.

In contrast to complex business logic, you sometimes also work on a task that has a complex technical problem to solve. By this I mean, for instance, making use of a new framework or exploring a piece of technology you have never used before.

In such a situation it is better to work by yourself, because you can work at your own pace. You have to do a lot of searching on the web or reading documentation on how the new technology works.

It is not helpful to do pair programming in such a case, because you hinder each other while acquiring the required knowledge.

However, if you get stuck then talking to a colleague about the solution often helps you to view the problem from a different angle.

Same level of expertise

Another important aspect to consider when doing pair programming is the level of expertise of the two developers working together.

Preferably both developers should be on the same level, because then they are able to work at the same speed.

Pair programming with a junior and a senior does not work very well. If the junior has the steering wheel, then the senior next to him just gets bored because he feels everything is too slow. In such a setting the senior’s potential is restricted, which is a waste of his time.

If the senior has the keyboard in his hand, then everything goes too fast for the junior. The junior is not able to follow the pace of the senior, and after a few minutes he loses the context.

This setup only makes sense if the senior slows down and explains to the junior, at a slower pace, what he is about to do.

However, then we are not talking about pair programming anymore. Then we are talking about a learning session, where the senior developer teaches the junior developer how to solve a specific problem.

But if both developers are on the same level, then it is amazing how much work they can accomplish in such a setting. The big benefit here is also that the two developers motivate each other, and in case one of them loses focus, the other brings him back on track again.

To sum it up: pair programming works well when two developers with a similar level of experience work together on solving a complex business problem.

Type 2: Synchronous code review

The second type is the synchronous code review. Here the coder produces the code herself and asks the reviewer for a review immediately when she is done with coding.

The reviewer joins the coder at her desk and they look at the same screen while reviewing, discussing and improving the code together.

Lack of knowledge of reviewer

This type of code review works well when the reviewer lacks knowledge about the goal of the task. This happens when the team does not hold refinement or proper sprint planning sessions together, where each task is discussed upfront.

This usually results in the situation where only a specific developer knows about the requirements of a task.

In these situations it is very helpful for the reviewer to get an introduction about the goal of the task before the review is started.

Lots of code improvements expected

Synchronous code reviews also work well if a lot of code improvements are expected due to the coder’s lack of experience.

If an experienced senior is going to review a piece of code that has been implemented by a very junior developer, then the review generally goes way faster when they make the improvements together after the junior claims he is done.

Downside of forced context switching

But there is a major downside to synchronous code reviews: forced context switches. These are not only very frustrating for the reviewer, but also slow down the whole team.

In fact, I have written a separate blog post about the 5 major problems with synchronous code reviews. Therefore I won’t go into more detail about that type of code review here.

Type 3: Asynchronous code review

Then we have the third type, the asynchronous code review. This one is not done together at the same time on the same screen, but asynchronously. After the coder is finished with coding, she makes the code available for review and starts her next task.

When the reviewer has time, he will review the code by himself at his desk on his own schedule, without talking to the coder in person, but writing down comments using some tooling.

After the reviewer is done, the tooling will notify the coder about the comments and necessary rework. The coder is going to improve the code based on the comments, again on his own schedule.

The cycle starts all over when the coder makes the changes available for review again. The coder keeps changing the code until there are no more comments for improvement. Finally, the changes are approved and committed to the master branch.
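
In a git-based workflow, one such rework cycle might look like this (a sketch, continuing the hypothetical pull-request example from the previous post):

# Coder: address the reviewer's comments on the open review branch
git checkout feature/invoice-export
git commit -am "Address review comments: add null check, rename helper"
git push    # the open pull request is updated automatically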

As you can see synchronous code reviews work quite differently compared to asynchronous ones.

No direct dependencies

The big benefit of asynchronous code reviews is that they happen asynchronously. The coder does not directly depend on the reviewer and both of them can do their part of the work on their own schedule.

Downside of many review cycles

The downside is that you might have many review cycles, which can spread over a couple of days until the review is finally approved.

When the coder is done, it usually takes a couple of hours until the reviewer starts to review. Most of the time the suggestions made by the reviewer are then fixed by the coder only the next day.

So the first cycle already takes at least a day. If you have a couple of those cycles, then the reviewing time spans a week—and this is not even taking the time for coding and testing into account.

But there are options to prevent this long timespan from getting out of hand. For instance, in my team we made the rule that every developer starts with pending reviews in the morning, before he picks up any other task. The same thing is done after the lunch break.

As the developer is out of his context anyway after a longer break, you don’t force unnatural context switching and still have the benefit of getting the code reviewed in a reasonable time.

Comparing the benefits and downsides of this type of code review I think that asynchronous code reviews should be the default type for every professional development team.

But before I tell you why I think that way, let’s have a look at the 4th type of code reviews.

Type 4: Code review once in a while

A long time ago I used to do code review sessions about once every month together with the whole team. We were sitting in a meeting room, and one developer was showing and explaining a difficult piece of code he had recently written.

The other developers were trying to spot potential issues, commenting and giving suggestions on how to improve the code.

I don’t think that any team uses the once-in-a-while code review method on a permanent basis. I can only think of one situation where this type could make sense: when the whole team has no experience at all with code reviews, then getting everyone together in a room and doing the review together a couple of times might help everyone understand the purpose and the goal of a code review.

However, in the long term the 4th type is not an adequate technique, because having the whole team work through a piece of code is rather inefficient.

Ok, now we have covered all types of code reviews.

So, now you might wonder which type you should choose.

Which code review type should I pick?

We talked about the formal type, which is obviously not so popular and hardly used in practice.

Then we talked about the category of lightweight code reviews and distinguished 4 different types.

Type 1, the instant code review, is done in pair programming and works well when two developers with a similar skill set are working on a complex business problem.

Type 2, the synchronous code review, works well when the reviewer lacks knowledge about the goal of the task and needs an explanation from the coder. It also works well if a lot of code improvements are expected due to the coder’s lack of experience.

But it has the downside of forced context switches, which is frustrating for the reviewer and slows down the whole team.

Type 3, the asynchronous code review, prevents the problem of forced context switching and works well for the most common use cases.

Type 4, the once-in-a-while code review, is not a permanent option for a professional team and may be used only to get a team started with code reviews.

Use asynchronous reviews by default

I think that a professional team should have the asynchronous code review in place as the default type, because it avoids a lot of the drawbacks of the synchronous review.

The synchronous review can be used in case the reviewer is not able to make sense of the changes made by the coder. But in such a situation the reviewer is going to ask the coder anyway for additional clarification. And if you work in a team these situations should hardly occur.

In case you don’t have a real team and work as a group of people, then the synchronous code review makes sense. If the reviewer does not know at all what you were working on the last couple of days, then it makes sense to give a proper explanation before you do the review together.

Switching to pair programming makes sense if you have two developers with a similar skill set and work on a complex business problem. But usually a team consists of people with multiple levels of experience and it does not work on complex business problems all the time. Most of the time you have common tasks with average complexity.

Therefore, the best choice for a professional team is to use the asynchronous review by default and switch to the synchronous type or pair programming if necessary.

Ok, that’s it for today.

What type of code review does your team use? Do you know of another type of code review, which I missed here? Then please let me know in the comments.

Talk to you next time. Take care and HabbediEhre.

Scrum of Scrums

The Scrum of Scrums is a meeting at the inter-team level with the purpose of surfacing dependencies between teams and aligning their collaboration.

In this article I would like to give you insights on how we implemented Scrum of Scrums at LeaseWeb.

I am going to show you a practical example.

I am not going to tell you about the theory behind it, because you can find such information in other places on the Internet.

I want to give you some practical insights on how Scrum of Scrums can be implemented, what tools we are using and the benefits and downsides we get out of it.

However, before we start, I would like to explain in a few sentences what Scrum of Scrums actually is.

Scrum of Scrums

The Scrum of Scrums has a similar purpose as the daily standup in a Scrum team. The difference is that the Scrum of Scrums is done at the inter-team level. This means that a representative of each Scrum team joins the Scrum of Scrums.

Similar to the daily standup of a Scrum team, in the Scrum of Scrums each team representative has to answer three questions:

  • What impediments does my team have that will prevent them from accomplishing their Sprint Goal (or impact the upcoming release)?
  • Is my team doing anything that will prevent another team from accomplishing their Sprint Goal (or impact their upcoming release)?
  • Have we discovered any new dependencies between the teams or discovered a way to resolve an existing dependency?

If you want to learn more about the Scrum of Scrums, then I recommend having a look at the Scrum@Scale guide. That’s also the place where the three questions come from.

The Scrum@Scale guide was published recently (February 2018) by one of the fathers of Scrum, Jeff Sutherland.

This is definitely a good starting point to learn more about scaling Scrum.

Ok, let’s have a look at how we implemented Scrum of Scrums at LeaseWeb.

Scrum of Scrums implementation

I have been working at LeaseWeb for a couple of years. To give you a more practical insight, I would like to explain how we implemented the Scrum of Scrums at LeaseWeb.

12 Scrum Teams

At LeaseWeb there are 12 Scrum teams. Each team is responsible for a certain service or product and naturally there are often dependencies between the teams.

For instance, when building a new feature, a couple of teams might be involved and have dependencies on each other. To realize the new functionality, several teams have to make changes in their products. Therefore close collaboration is required. And that’s why Scrum of Scrums has been implemented.

Big board in the hallway

We have a big board in the hallway, where everyone can see it when walking by. So if people from other departments are interested in what the development teams are currently working on, they can have a look at this board.

And the board is really big; actually, it is a whole wall. If you want to know the dimensions, I would say it is about three meters high and six meters wide. It is quite an impressive view when you enter the floor of the development department.

The board is basically a big table with columns and rows. There are columns for the current sprint n, the next sprint n+1, sprint n+2 and sprint n+3. In addition there are also columns for the next months and the next quarter.

While the columns are used to display the time, each team gets a separate row in the table and fills it with cards.

The cards

The cards contain high-level information on what each team is doing in the current sprint, as well as what the teams are planning to do in the upcoming sprints.

The level of detail is highest in the current sprint and obviously decreases the further you plan into the future.

So it is totally fine to just have one card in the next-quarter column with, for instance, “promo” written on it. This indicates that the team plans to work on a promotion feature in the next quarter. But it is important to understand that this is the high-level plan from today’s perspective. It is not written in stone and it might change – in fact it is very likely to change, because it is very difficult to plan ahead for such a long time.

However, the team should have a more detailed plan of what they will be working on in the upcoming sprint, especially if they have dependencies on other teams, so they can coordinate and resolve those dependencies as well as possible.

But even though the plans might change in the future, it is good to have the card on the board, because it triggers discussions with other teams and stakeholders about the most important things to work on and about inter-team dependencies.

The cards are magnetic and stick to the board. You can write on them with a whiteboard marker and, after cleaning, reuse them again. We also use different colors for different projects or epics. You can find those magnetic cards on Amazon, and the lines you can use to build the table on the board can be found here as well.

The Scrum of Scrums meeting

The Scrum of Scrums meeting itself happens once a week and is timeboxed to 15 minutes.

A representative of each team explains what the team is doing in the current sprint and what the plans for the upcoming sprint are – answering the three questions I mentioned above. As there are 12 teams, it is crucial to keep it short and not get into details.

In general, the focus during the explanations is on dependencies with other teams. All other details are left out.

In case there are no dependencies with other teams, the most important goals of the team are mentioned. I usually mention my team’s sprint goal there as well. Nonetheless, as I already said, the key is to keep it short.

If there are any questions during the explanation, they are answered right on the spot if the answer is short. In case the answer needs a more detailed explanation, it is postponed until after the Scrum of Scrums, and the people who are interested stay after the end of the meeting to hear and discuss the details.

If the answer triggers a discussion during the Scrum of Scrums, then any of the present Scrum Masters is allowed to interrupt the discussion and ask the participants to discuss the topic after the meeting. That’s how it is possible to keep the timebox of 15 minutes – even though there are 12 teams.

Benefits and downsides

Let’s start with the downsides of having a Scrum of Scrums meeting.

Well, everyone is busy with his own work and has a tight schedule, and here comes another meeting that you have to attend. And even though it is just 15 minutes, you need some time to prepare the board. In addition, you have to interrupt your normal work and therefore also lose some focus time, because of the context switch before and after the meeting.

But next to that, there are also quite some benefits of having the Scrum of Scrums in place.

First of all, this formal process forces you to think about and explain what you are working on and what you are planning to do.

And just by that you might encounter impediments with other teams earlier, so you are able to work on resolving them before they become a blocker within your team. This is going to save your team a lot of hassle, nerves and time.

Another great benefit is that you hear what other teams are working on.

And you might gain better insight into why they are not able to work on the tasks your own team depends on. This gives you a better understanding of the overall direction the whole department is going in and the goals the teams are working towards.

The Scrum of Scrums also fosters collaboration between teams.

I recall multiple occasions where, after the meeting, I went to other teams to ask for their help, or somebody came to me with information I didn’t know yet.

For instance, my team planned to work on a new backup solution, which I explained briefly during the meeting. Afterwards another Scrum Master poked me and told me that his team already had something similar in place.

In the end, the two teams worked together on a solution, and with the knowledge provided by the other team it was way easier for my team to implement a good one. Without the Scrum of Scrums we would probably have worked on it by ourselves and presented the solution after the sprint. Only then would we have figured out that the knowledge and infrastructure already existed in the department, and that we had just done the work twice.

Closeup

In this article I gave you an overview on how we implemented Scrum of Scrums at LeaseWeb. I have explained what tools we are using and what benefits and downsides we get out of it.

Do you have Scrum of Scrums in place in your company as well? What are the differences compared to the implementation at LeaseWeb?

Or do you not have Scrum of Scrums in place in your company yet, and are you planning to implement it? Let me know how it works!

Ok, that’s it for today. Stay tuned and HabbediEhre!

Understanding Docker with Visual Studio 2017 – Part 2

In Part 1 of the series “Understanding Docker with Visual Studio 2017” I described what you need to prepare to get docker up and running on your Windows machine.

In this part 2 I’m going to explain how to use Docker with a .Net project using Visual Studio 2017.

I am going to describe each file that is created by Visual Studio for the Docker integration, and the commands that are executed by VS when you start your application within a Docker container.

At the end of this blog post you should have a better understanding of what Visual Studio is doing for you in the background and how you can debug a Docker application using those tools.

I have been working on a sample project called TalentHunter, which I put on GitHub. I’m going to use that project to explain the steps. Feel free to download it from my GitHub page.

Adding Docker support

It is very easy to add docker support to your .Net project.

Just create a new web or console application using Visual Studio 2017 and then right-click on the web or console project -> Add -> Docker Support.

Voila, you have Docker support for your project. That’s how easy it is!

By doing this, Visual Studio creates a bunch of files, which I’m going to explain here in more detail:

  • dockerfile
  • docker-compose.yml
  • docker-compose.override.yml
  • docker-compose.ci.build.yml
  • docker-compose.dcproj
  • .dockerignore

Let’s look at each file and figure out what it contains.

dockerfile
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WaCore.TalentHunter.Api.dll"]

The dockerfile contains information that is used when BUILDING a new Docker image. It is NOT used when starting the container – this can be confusing, but it is important to understand the difference.

In the specific case above, the image is derived from the official Microsoft image for ASP.NET Core 2.0. It accepts a build argument named “source”, which is supplied when the docker-compose command is executed.

It defines “/app” as the working directory and exposes port 80 to the outside world.

When building the image, Docker copies the content from the path specified in the source argument into the working directory of the image. If no source argument is specified, the contents of obj/Docker/publish are used.

When the container is started, it will execute the command dotnet WaCore.TalentHunter.Api.dll to start the web application.
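
To see what this dockerfile does in isolation, you could build and run the image manually with the Docker CLI (a sketch; the image name, tag and port mapping are just examples):

# Build the image, pointing the source arg at the publish output folder
docker build --build-arg source=obj/Docker/publish -t talenthunter-api .

# Start a container from the image, mapping container port 80 to host port 8080
docker run -d -p 8080:80 --name talenthunter talenthunter-api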

docker-compose.yml

An application usually consists of multiple containers (e.g. frontend, backend, database, etc.), but to keep it simple I have only one container in this example.

The docker-compose.yml looks as follows:

version: '3'
 
services:
  wacore.talenthunter.api:
    image: herbertbodner/wacore.talenthunter.api
    build:
      context: ./WaCore.TalentHunter.Api
      dockerfile: Dockerfile

The concept of a docker-compose service is to define the set of images that are required to run an application. As already mentioned, this simplified example has only one image, which is named herbertbodner/wacore.talenthunter.api.

The last two lines in the file define where the dockerfile can be found: within the subdirectory WaCore.TalentHunter.Api, under the name Dockerfile.
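
With this file in place you can build the image and start the service from the plain CLI as well, without Visual Studio (a sketch; run these from the directory containing docker-compose.yml):

# Build the image(s) defined in docker-compose.yml
docker-compose build

# Build (if necessary) and start all services in detached mode
docker-compose up -d

# Stop and remove the containers again
docker-compose down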

docker-compose.override.yml
version: '3'
 
services:
  wacore.talenthunter.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
networks:
  default:
    external:
      name: nat

This file completes the configuration information together with the docker-compose.yml file. It sets an environment variable ASPNETCORE_ENVIRONMENT and publishes port 80.

Next to that it also defines some basic networking configuration.

This docker-compose.override.yml file is used together with the docker-compose.yml (as we will see in a moment below).
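
This is docker-compose’s standard override mechanism: when you run docker-compose without any -f flags, it automatically merges docker-compose.yml with docker-compose.override.yml. The explicit equivalent would be:

# Explicitly listing the two files that docker-compose merges by default
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d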

docker-compose.ci.build.yml

If you have ever set up a traditional continuous integration/delivery pipeline, you probably know that you usually have to install a couple of tools on your build server to enable a successful build of your application (e.g. build engine, 3rd-party libraries, git client, sonar client, etc.).

The idea of the docker-compose.ci.build.yml file is to define an image that has all the dependencies installed that are required to build your application. This means that we can also create a Docker image for our build server.

However, we are not going to make use of this file in the remainder of this blog post, therefore it is out of scope. Just keep in mind that you can also dockerize your build server like this.

docker-compose.dcproj

This file is the Docker project file and contains relevant information for Docker, like the version and where to find the configuration files.

.dockerignore
*
!obj/Docker/publish/*
!obj/Docker/empty/
!bin/Release/netcoreapp2.0/publish/*

The .dockerignore file contains a list of files and folders that should be ignored by Docker.

As you can see in the file, everything is ignored (indicated by the *), except the three folders indicated in the last three lines.

Running the application with Docker

You can easily build and run the application by setting the docker-compose project as the startup project and hitting F5, or by right-clicking the docker-compose project and selecting “Build” in the dropdown menu.

Docker - Build Image

When you run the docker-compose project, you will see in the output window (build section) the commands that are executed by Visual Studio.

Some of these commands are important in general, but not for understanding what is going on. For instance, there are commands that check whether the container is already running and, if so, kill it.

“docker-compose up” command

Nevertheless, the important command executed by Visual Studio when hitting F5 is the docker-compose up command with the following parameters:

docker-compose -f "D:\_repo\TalentHunter-Api\src\docker-compose.yml" -f "D:\_repo\TalentHunter-Api\src\docker-compose.override.yml" -f "D:\_repo\TalentHunter-Api\src\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose9846867733375961963 up -d --build

The docker-compose up command builds an image and starts the container.

Let’s have a closer look at the command and its parameters:

The -d and --build parameters at the end tell docker-compose to start the container in detached mode and to force a build. The -p parameter gives the project a specific name.

Then there are three configuration files specified with the -f parameter: docker-compose.yml, docker-compose.override.yml and docker-compose.vs.debug.g.yml.

Together they contain the information on how to build the Docker image. We already looked at the first two files a moment ago. The last configuration file (docker-compose.vs.debug.g.yml) is generated by Visual Studio itself, and it differs depending on the build mode (debug or release).

In release mode a file named docker-compose.vs.release.g.yml is created, while in debug mode the file docker-compose.vs.debug.g.yml is used.

So let’s have a closer look at those two yml files and see the differences.

docker-compose.vs.release.g.yml
version: '3'
 
services:
  wacore.talenthunter.api:
    build:
      args:
        source: obj/Docker/publish/
    volumes:
      - C:\Users\hb\onecoremsvsmon\15.0.26919.1:C:\remote_debugger:ro
    entrypoint: C:\\remote_debugger\\x64\\msvsmon.exe /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn /nowowwarn /timeout:2147483646
    labels:
      .....

When you build your application in Visual Studio in release mode, then two main things happen:

First of all, your application is built and the build output is copied to the output folder “obj/Docker/publish/”.

Then your container is built using the docker-compose up command with the above configuration file docker-compose.vs.release.g.yml. In the configuration file you can see the build argument source, which points to the same folder where the build output of our application is (“obj/Docker/publish/”). This argument is used in the dockerfile to copy all the content from that folder into the Docker container (check again the code in the dockerfile, which we looked at a moment ago).
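
The publish step itself corresponds roughly to the following CLI command (a sketch; Visual Studio drives this through MSBuild, so the exact invocation may differ):

# Publish the app in Release mode into the folder the docker build expects
dotnet publish -c Release -o obj/Docker/publish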

On top of that, a volume mount is created for the debugging tool msvsmon, but I’ll dive a bit deeper into that in a second.

docker-compose.vs.debug.g.yml

When you build your application in Visual Studio in debug mode, then the docker-compose.vs.debug.g.yml file is created by Visual Studio and used as an input to build the container. That configuration file looks as follows:

version: '3'
 
services:
  wacore.talenthunter.api:
    image: herbertbodner/wacore.talenthunter.api:dev
    build:
      args:
        source: obj/Docker/empty/
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=1
      - NUGET_PACKAGES=C:\.nuget\packages
      - NUGET_FALLBACK_PACKAGES=c:\.nuget\fallbackpackages
    volumes:
      - D:\_repo\TalentHunter-Api\src\WaCore.TalentHunter.Api:C:\app
      - C:\Users\hb\onecoremsvsmon\15.0.26919.1:C:\remote_debugger:ro
      - C:\Users\hb\.nuget\packages\:C:\.nuget\packages:ro
      - C:\Program Files\dotnet\sdk\NuGetFallbackFolder:c:\.nuget\fallbackpackages:ro
 
    entrypoint: C:\\remote_debugger\\x64\\msvsmon.exe /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn /nowowwarn /timeout:2147483646
    labels:
      ....

While in release mode the application build output is copied to the docker image, in debug mode the build output is not copied to the image. As you can see in the configuration file above, the source argument points to an empty directory (“obj/Docker/empty/”) and therefore nothing is copied by the docker-compose up command to the docker image.

So what happens instead?

Well, instead of copying the files, a volume mount is created to the application project folder. Therefore the docker container gets direct access to the project folder on your local disk.

In addition to the project folder, three other volume mounts are created: two mounts give the Docker container access to the NuGet packages, which might be necessary to run the application, and another mount points to the folder that contains the debugging tools.

Let’s have a closer look at how debugging works with Docker containers.

Debugging the application running in Docker with Visual Studio

As already mentioned, a volume mount maps the folder “C:\Users\hb\onecoremsvsmon\15.0.26919.1\” into the container. That folder contains the debugging tools that come with Visual Studio.

When the container is started, msvsmon.exe is executed in the container as well, because it is defined as the entrypoint in the docker-compose.vs.debug.g.yml file (see above).

msvsmon.exe then communicates with Visual Studio, so we are able to set breakpoints and debug the code as we wish.

If you want to know more about how Visual Studio 2017 and Docker work together, then I recommend the following Pluralsight course: “Introduction to Docker on Windows with Visual Studio 2017” by Marcel de Vries.

I love Pluralsight, and if you are looking for a source of condensed information prepared in a structured way for you to consume, then I urge you to look into https://www.pluralsight.com.

CloseUp

In this post we saw how easy it is to add Docker support to a .Net Core project using Visual Studio 2017.

Then we looked into all the different files that are created automatically, to figure out what they are used for.

Finally I explained what is happening in the background when you build a docker project with Visual Studio 2017 in release mode as well as in debug mode.

Now, I hope this getting-started guide helps you set up your own projects using Docker. Let me know if anything is unclear or if you have any questions.

That’s it for today. See you around and HabbediEhre!

Understanding Docker with Visual Studio 2017 – Part 1

Containerization of applications using Docker with Visual Studio 2017 is trendy, but it is not so easy to understand what is happening in the background.

Therefore, in this blog post I’m going to explain why using containers is beneficial and what a container or image is. Then we will talk about how to set up Docker with Visual Studio on Windows 10.

In part 2 of this blog post series I’m going to dive deeper into an example project and explain the Docker files that are created and the Docker commands that are executed, which the Visual Studio Docker integration simplifies for you.

You can find part 2 here.

Ok, let’s start with the purpose of using containers.

Why use containers?

Although it is relatively complex to set up your whole infrastructure to use containers, it gives you a lot of benefits in the long run.

However, the usage of containers is not always the best approach for every project or company.

Using containers is usually beneficial when you use Scrum with frequent deployments and a Microservice architecture.

Let’s have a closer look at those two areas – Scrum with regular deployments and the Microservice architecture.

Scrum with regular deployments

The value of Scrum is to deliver a “Done”, usable, and potentially releasable product Increment each sprint, so you can get feedback as soon as possible.

Therefore, it is necessary to ship your new features to your customers as soon as possible. This means that you want to deploy as often as possible.

As the deployment is done quite often, it should be as simple and effortless as possible. You don’t want your team to spend a lot of time deploying new features. Therefore, it is wise to automate the deployment.

Then you can focus your main effort on building new features instead of deploying them.

Containers can help you to simplify the deployment of (or rollback to) different versions of your application to staging environments as well as the production environment.

This is especially true if your application consists of a lot of small services, which happens a lot these days, as a new architectural style in software development is widely adopted – the Microservice architecture.

Microservice architecture

Martin Fowler, a very well-known figure in the software architecture world, explains in his article that the Microservice architecture approach has become very popular over the last couple of years.

The main concept of the Microservice architecture is to split up your application into lots of small services that talk to each other, instead of having one big monolithic application.

While both architectures (microservice as well as monolithic) have their benefits and downsides, one important distinction is that you have to deal with a lot of services when using a Microservice architecture.

Therefore, a lot of tools have been created in recent years to manage and deploy that vast amount of services in a coordinated and automated fashion.

One very popular tool for this is Docker.

What are containers?

But what exactly are containers? What is an image? What’s the difference from a virtual machine? And what’s Docker?

Let’s have a look at these terms and make them clearer, before we dive deeper into how to install Docker on Windows.

Virtual machine vs Container vs Docker

Virtualization of computers has been widely adopted in the past two decades, because it has some big advantages.

Compared to dedicated servers, Virtual Machines (VMs) are easy to move around from one physical server to another. It is easy to create backups of the whole virtual machine, or to restore it to a previous state (for instance, to before an update was applied that broke the application).

Containers are a bit similar to VMs, because we can also move containers around easily and create backups of different versions.

However, the main difference is that containers are much more lightweight compared to VMs.

While each VM has its own operating system installed, containers can share the same operating system.

The two main advantages of containers over VMs are that containers start up much faster (in milliseconds) and use fewer resources, resulting in better performance.

To say it in one sentence: Containers are lightweight and portable encapsulations of an environment in which to run applications.

Ok, now we know about VMs and containers. How does Docker fit in here?

Docker is a company providing container technology, which is called Docker containers.

While the idea of containerization has been around for quite some time and has also been implemented on Linux (called LXC), Docker introduced several significant changes to LXC that make containers more flexible to use.

However, when people talk about containers these days, then they usually mean Docker containers.

Image vs Container

There is a difference between an image and a container. The two are closely related, but distinct.

An image is an immutable file that is essentially a snapshot of a container. It is built up from a series of read-only layers.

To use a cooking metaphor: if an image is a recipe, a container is the cake. Or, using a programming metaphor: if an image is a class, a container is an object or instance of the class.

You can create multiple instances of an image, therefore having multiple containers based on the same image.
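
You can try this out with any image (a sketch using the public nginx image, purely for illustration):

# Two independent containers, both created from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Lists the running containers: two instances of the nginx image
docker ps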

Windows Containers vs Linux Containers

You can run Docker containers on Linux as well as on Windows.

However, native Docker support is only provided on the newer versions of Windows: it is supported on Windows 10 and on Windows Server 2016 (and on Azure, of course).

As we already mentioned above, containers share the operating system kernel. Therefore, we have to distinguish between Linux containers and Windows containers, because they target different kernels (Windows kernel vs Linux kernel).

However, on Windows 10 for example, we can run Windows containers as well as Linux containers.

Why and how that works, I’m going to explain in a minute.

But first, let’s set up Docker on Windows 10.

Setting up Docker on Windows 10

This is basically very simple and consists of just two steps, which you can read up on in the Docker documentation. But what is missing on the Docker documentation pages is some additional information about what is happening in the background.

That’s why I’m trying to give more insights on that here.

CPU Virtualization must be enabled

Before you install anything else, make sure that CPU virtualization is enabled in your BIOS. At first I didn’t check this on my laptop and got very weird errors. It took me some time to google the error “Unable to write to the database” and figure out that the problem was that CPU virtualization was not enabled.

So make sure this is configured on your machine before you proceed.

Install Docker for Windows

Next you have to install Docker for Windows.

When you do this a couple of things happen in the background.

First of all, the Hyper-V Windows feature is enabled on your machine. This feature is a hypervisor service, which allows you to host virtual machines.

Then a new virtual machine (VM) is created. This VM runs a MobyLinux operating system and is hosted in Hyper-V. If you open the Hyper-V Manager after you have installed Docker for Windows, you will see something like this:

Docker HyperVManager

By default the virtual hard disk of this VM is stored on:

C:\Users\Public\Documents\Hyper-V\Virtual hard disks

This MobyLinux VM has the Docker host installed, and every Linux Docker container we start up will run on top of this VM.

Every Windows Docker container, however, will run natively on your Windows 10 machine.
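
If you want to check which kind of daemon your Docker CLI is currently talking to, docker version reports the OS/Arch of both client and server (a sketch; the exact output layout may vary by version):

# The "Server" section shows linux/amd64 when targeting the MobyLinux VM,
# and windows/amd64 when switched to Windows containers
docker version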

Now that you have Docker installed on your local machine, let’s have a look at how Visual Studio is integrated with Docker.

In part 2 I’m going to run through an example application for Docker with Visual Studio 2017 and explain each file that is created by Visual Studio when adding Docker support.

I’m also going to explain the commands that are executed by VS when you start or debug your application within a Docker container.

You can find part 2 here.

Stay tuned and HabbediEhre!

Netherlands biggest interest in Scrum worldwide

How interested are people in Scrum? Is its popularity growing or declining? What about the interest in Agile or DevOps?

A few days ago I stumbled upon the website from Google called https://trends.google.com, which tells you what people search for on the Internet using the Google search engine.

It’s amazing how the Internet works today. Google alone currently receives about 60,000 search requests per second! Yes, per second!

I was actually curious to see what people who are interested in Agile and Scrum are looking for.

So I clicked around a bit and found 3 interesting results, which I would like to share with you:

“Scrum” most popular in the Netherlands

It turns out that people from the Netherlands searched for “Scrum” more than people in any other country in the world. I looked at a time range of the previous 12 months (November 2016 to October 2017).

Google Trends: Scrum

The numbers are calculated on a scale from 0 to 100 and represent search interest relative to the highest point in the timeline (for the chart in the middle of the picture) or relative to the location where the search term is most popular (for the country ranking at the bottom of the picture).

A higher value means a higher proportion of all queries, not a higher absolute query count. This means the numbers are relative to the overall search volume of a country, so you can easily compare a very small country with a very big one.

The result is very interesting for me, because I (still) live in the Netherlands and I know that Scrum and Agile are big here.

But I didn't know that the Netherlands leads the ranking with the highest interest in Scrum worldwide.

St. Helena and China are (to my surprise) in second and third place in the ranking. I will come back to both of them in a minute.

If you compare the Dutchies to Switzerland and Germany, which are in 4th and 5th place in the ranking, the interest there is only about 50% of that in the Netherlands.

I expected the U.K. and the U.S. somewhere near the top, but the U.K. is only at rank 20 and the U.S. even at rank 25.


Another interesting result:

“Agile software development” most popular in St. Helena

If you are like me, then you might ask yourself: Where the f*ck is St. Helena?

I had to look it up on Google Maps as well; it's a small island somewhere in the middle of the Atlantic Ocean. It probably sounds familiar to you, because this is the island where Napoleon was exiled after he lost the Battle of Waterloo in 1815.

Anyway, around 4,500 people live on the island, and they are obviously crazy about Agile software development. (Keep in mind that the numbers are relative to the total search volume: on an island with so few inhabitants, a handful of searches is enough to top the ranking.)

If you go there on holiday, you will probably find agile people all over the place. 🙂

[Screenshot: Google Trends results for "Agile software development"]

As you can see in the graph, the popularity of "Agile software development" was quite stable throughout the year, but has been increasing since August 2017 (until today, November 2017).

So either Agile software development has really been getting more traction over the last 3 months, or Google made changes to one of their algorithms.

This is particularly interesting for me, because the number of page views on my blog increased as well. It follows pretty much the same pattern as above: quite stable throughout the year, and then a linear increase within the last 3 months.

So I thought: maybe the graph in Google Trends is actually caused by my blog?

Well. I wish 🙂

Anyway, here is the last interesting result I found:

“DevOps” is very popular in China

Come on!

China?

That's weird!

[Screenshot: Google Trends results for "DevOps"]

If you look at the numbers, Chinese interest in DevOps is way ahead. St. Helena, again in second place, has only 55%. And then we have India in third place with only 40%.


But anyway, I think Google Trends is a very interesting and powerful tool. And it is really fast.

It gives you insight into what people are searching for on the Internet using the Google search engine.

And the cool thing is that Google gives you access to this information for free.

So have a look and play around. There are quite a few nice features to discover.

Btw: if you were wondering why all three graphs have their minimum at the same time, I can tell you that people don't want to know about Scrum, Agile development or DevOps at the end of the year. 🙂

It seems like everyone is at home celebrating Christmas and New Year, spending no time searching the Internet for these topics.

And that's not only the case for these three topics; the number of queries in general goes down during those days.

If you want to look into this further yourself, here are the links to the queries I used in Google Trends:

Ok, that's it for today. Stay tuned and HabbediEhre!

The power of habit – executing tasks automatically

A habit is something that you do often and regularly, sometimes without even knowing that you are doing it. The great thing about habits is that you can train yourself to execute certain tasks regularly without even thinking.

Every one of us has countless habits, good ones and bad ones. And they are triggered by certain events in life.

For instance, did you brush your teeth this morning? You might not even remember doing it, because it is a habit. You don’t even have to think about it; it is just part of your morning routine, triggered automatically when you wake up.

You need almost no will power to execute a habit

The great thing about a habit is that you need almost no will power to execute it. That’s why habits are so powerful.

If you can turn a task into a habit, one which you know will help you in the long term, then you need almost no will power to execute it consistently.

For example, if you want to learn to play the piano and you turn practicing into a habit, then you don’t need any will power to get yourself in front of the piano to practice.

Or if you want to eat healthier and you turn that into a habit, then you don’t have to spend your limited will power every day on overcoming the temptation to eat chocolate. It happens automatically, without even thinking.

But how do you create habits then?

Creating habits is not so easy, but if you do it regularly, it gets easier over time.

I was recently listening to the audiobook “Superhuman by Habit” by Tynan. I highly recommend this book to everyone who wants to learn more about habits.

Tynan explains in his book that there are two phases to building habits: the loading phase and the maintenance phase.

The loading phase

In the loading phase you want to teach yourself a new habit. To do that, you have to execute the task every day in the same routine.

For instance, every day after you come home from work, you play the piano for 5 minutes.

This requires a lot of discipline and you have to constantly remind yourself to do that. For example, you can set an alarm on your phone or put a sticky note on the piano so you won’t miss it when you come home from work.

In the loading phase it is very important to always execute the task, even when your motivation is down.

If you really do miss it once, then plan your next day in a way that ensures you will execute the task. Because if you miss it more than once, you basically have to start over with the loading phase.

The maintenance phase

After the loading phase your habit is part of your daily routine and you can switch to the maintenance phase.

You have built your habit to the extent that it requires almost no will power to execute the task. It is automated and you don’t even have to think about it anymore.

When to switch to the maintenance phase?

In general you cannot predict how long it will take to build a habit. It depends on the person and also on what exactly you want to turn into a habit.

To find out whether you have successfully built your habit, ask yourself the following question: “If I stopped the loading phase today, would something change?”

If your answer is “No”, then you have built your habit. If you expect to fall back into the old routine within a few days or weeks, then keep going with the loading phase.

More flexibility in the maintenance phase

According to Tynan, you can allow yourself to be a bit more flexible once you have successfully built your habit.

For instance, you can train yourself to eat no junk food at all. During the loading phase you are, of course, not allowed to go to McDonald’s for a burger. Such a loading phase might take up to several years to build the habit.

Afterwards, in the maintenance phase, you can still stick to your plan, but be a bit more flexible. You can allow yourself to break the rule every now and then, as long as it remains the exception. For example, you can agree with yourself that you are allowed to eat at McDonald’s, but only on Sundays and with your friends.

Drifting off

Over time, though, you are going to drift away from your schedule. So you have to pay attention over the coming months and years and take corrective action when you realize that you have drifted too far.

Benefits of good habits

As I already mentioned, the awesome thing about habits is that they require almost no will power to execute. You carry them out automatically, without putting much thought into it.

It is of course quite some effort to build good habits. But even if the loading phase takes several years, you will most likely have the habit for the rest of your life. So investing a few years is still a small effort compared to what you gain in the decades afterwards.

Until now I have only been talking about good habits. But you can also have bad habits: things you do that are bad for you, possibly without even noticing.

Destroying bad habits is a chapter of its own. I am going to talk about it in one of the upcoming posts.

For now, let me know in a comment: for which daily tasks do you want to build habits? Is there anything that would only require a short amount of time per day, but that you are still not able to do consistently?

Ok, that’s it. I wish you a great day. Stay tuned and HabbediEhre!


Will Power – Discipline can be trained

Will power, or discipline as you can also call it, is the ability to control yourself. The interesting thing about will power is that it behaves like a muscle: it needs time to recover after you have strained it, but you can train it to make it stronger.

On the one hand, you need will power to overcome inner temptation and stop yourself from doing things that are bad for you but that you are so used to.

On the other hand, you need will power to force yourself to do things that you know would be good for you.

For instance, you need will power to resist the temptation to eat sweets when you are on a diet. And you also need it to push yourself off the couch and to the gym to get in shape.

It seems that people who achieve a lot in life have very strong will power; otherwise they would not be able to achieve those things. For instance, athletes train every day, while for “normal” people it is hard to even go to the gym once or twice a week.

Will power is limited

Scientists have found in multiple experiments that human will power is limited. Similar to muscle power, you only have a limited amount of will power per day.

For example, you can only lift weights a certain number of times until your muscles get tired. Then your muscles need some rest to recover before you can use them again.

Similarly, you can only stretch your will power to a certain degree; after that you need some time to recover and regain your strength.

I was recently reading the book “The Power of Habit” by Charles Duhigg. I highly recommend it if you want to know more about habits from a scientific point of view.

Anyway, in the book he mentions an experiment that was conducted in the 90s.

The experiment

76 undergraduates participated in the experiment. They were told that the goal of the experiment was to test taste perception. But this was not true; the real goal was to test will power.

Cookie vs radish eaters

Each of the participants was put in a room with a plate of food. Half of the plate was filled with warm, fresh cookies and the other half was filled with radishes, the bitter vegetable.

The scientist told half of the participants that they were only allowed to eat the cookies, and the other half that they were only allowed to eat the radishes. Then the scientist left the room and the participants were on their own.

Of course, the cookie eaters were in heaven and enjoyed the fresh, sweet cookies.

The participants assigned to the radishes were craving the cookies, but they had to stick to the bitter vegetables. Ignoring radishes is easy, but resisting warm cookies requires a lot of discipline.

The unsolvable puzzle

After a few minutes the scientist came back into the room, removed the plate and told the participants that they needed to wait for a few minutes. Each participant got a puzzle to solve while they were waiting.

The scientist told them that they could ring a bell if they needed something. Then the scientist left the room.

Actually, the puzzle was the most important part of the experiment. It was not solvable: every attempt fails, and it requires a lot of discipline to try again and again and again.

The cookie eaters were very relaxed. They tried to solve the puzzle over and over again; they still had a lot of will power left. Some of them spent more than half an hour before they rang the bell.

The radish eaters, on the other hand, were very frustrated and rang the bell much earlier. Some of them were angry and told the scientist that this was a stupid experiment.

In the end, the cookie eaters rang the bell after 19 minutes on average, while the radish eaters rang it after only 8 minutes. The cookie eaters spent more than double the amount of time on the unsolvable puzzle.

This means the radish eaters had already used up a lot of their will power while resisting the cookies.

Will power can be trained

A number of experiments have shown that will power can be trained. If you want to know more, read the book I already mentioned above: “The Power of Habit” by Charles Duhigg.

You can train your will power just like your muscles. Training every day makes your muscles stronger, so you are able to lift more weight.

It is the same with will power: training it increases its strength, and your daily maximum of will power grows.

Don’t rely on will power

I used to think that successful people simply have a lot of will power, or discipline. Otherwise, how would they be able to achieve all those things?

To my surprise, it is not will power that drives successful people. Will power is important, but the key is building habits.

Building a habit requires discipline, but once you have built one, executing it requires almost no discipline at all.

For example, when you were a kid, your parents probably always had to remind you to brush your teeth before going to bed. It required a lot of discipline from your parents to push you and build this habit for you.

Nowadays you probably don’t even think about brushing your teeth before you go to bed. It just happens automatically and doesn’t require much discipline. The habit has been built; executing it is easy.

Anyway, the whole topic of habits is a big and interesting one. Therefore I will dedicate one of the upcoming blog posts to it.

Ok, that’s it for this week.

The next time you come home from work planning to go running but have no motivation at all, remember that will power and discipline can be trained like a muscle 🙂

Stay tuned and have a great week. HabbediEhre!

Nexus – the scaling Scrum framework

Nexus is a framework that builds on top of the Scrum framework and is designed for scaling. It focuses on solving cross-team dependencies and integration issues.

What is Nexus?

The Nexus framework was created by Ken Schwaber, co-creator of the Scrum framework.

Similar to the Scrum guide, there is also a Nexus guide, which contains the body of knowledge for the framework.

It was released by scrum.org in August 2015.

You can find the definition of Nexus in the Nexus guide as follows:

Nexus is a framework consisting of roles, events, artifacts, and techniques that bind and weave together the work of approximately three to nine Scrum Teams working on a single Product Backlog to build an Integrated Increment that meets a goal.

The Nexus framework is a foundation to plan, launch, scale and manage large product and software development initiatives.

It is for organizations to use when multiple Scrum Teams are working on one product as it allows the teams to unify as one larger unit, a Nexus.

Scrum vs Nexus

Nexus is an exoskeleton that rests on top of multiple Scrum Teams when they are combined to create an Integrated Increment.

Nexus is consistent with Scrum and its parts will be familiar to those who have worked on Scrum projects.

The difference is that more attention is paid to dependencies and interoperation between Scrum Teams.

It delivers one “Done” Integrated Increment at least every Sprint.

New Role “Nexus integration team”

The guide defines a new role, the Nexus integration team.

It is a Scrum Team that takes ownership of any integration issues.

The Nexus integration team is accountable for an integrated increment that is produced at least every Sprint.

If necessary, members of the Nexus integration team may also work on other Scrum Teams in that Nexus, but priority must be given to the work of the Nexus integration team.

Event “Refinement”

In Nexus the refinement meeting is formalized as a separate Scrum event.

In the cross-team Refinement event, Product Backlog items are decomposed in enough detail to understand which teams might deliver them.

After that, dependencies are identified and visualized across teams and Sprints.

The Scrum teams use this information to order their work to minimize cross-team dependencies.

Event “Nexus Sprint Planning”

The purpose of Nexus Sprint Planning is to coordinate the activities of all Scrum Teams in a Nexus for a single Sprint.

Appropriate representatives from each Scrum team participate and make adjustments to the ordering of the work as created during Refinement events.

Then a Nexus Sprint Goal is defined: an objective that all Scrum Teams in the Nexus work to achieve during the Sprint.

After that, the representatives rejoin their individual Scrum Teams to do their own team Sprint Planning.

Event “Nexus Daily Scrum”

The Nexus Daily Scrum is an event for appropriate representatives from individual Scrum Teams to inspect the current state of the Integrated Increment.

During the Nexus Daily Scrum the Nexus Sprint Backlog should be used to visualize and manage current dependencies.

The work identified during that event is then taken back to individual Teams for planning inside their Daily Scrum events.

Wrap up

This is a very high-level overview of Nexus; you can find more information in the Nexus guide or in this nice introduction.

I have been practicing Scrum for quite a while now, but I had never heard about Nexus before. Only last week did I stumble upon it on the Internet.

Therefore I also don’t know anybody who is actively using the Nexus framework in their daily work.

But from a first look it seems like Ken Schwaber did a very good job again when defining this framework.

I hope that I will have the chance to work with Nexus in real life some time. Of course, for that you need an environment where it would make sense to give the Nexus framework a try.

Ok, that’s it for this week. Have a good day and HabbediEhre!

Scrum values – new section in scrum guide

The scrum guide, the official description of the scrum framework, has recently been extended with a new section: the scrum values.

Before we dive deeper into what the scrum values are about and what they mean, I want to give you a quick overview of the history of scrum, and especially of the scrum guide.

Then it should be easier to understand the big picture: when and why the scrum guide was created and updated over the years.

History of scrum

Scrum is now about 21 years old.

Jeff Sutherland and Ken Schwaber, the fathers of scrum, presented their paper “SCRUM Software Development Process” for the first time in 1995 at the OOPSLA conference in Austin, Texas.

But the name scrum was not their invention. They borrowed it from the paper “The New New Product Development Game”, published by Takeuchi and Nonaka in 1986.

Scrum is used to solve complex problems. It uses an empirical approach and solves problems based on experience and facts.

To date, scrum has been adopted by a vast number of software development companies around the world. But it has also been successfully applied in other domains, for instance manufacturing, marketing, operations and education.

History of scrum guide

Jeff and Ken published the first version of the scrum guide in 2010, a full 15 years after their first presentation of scrum at the conference. I don’t know why it took them so long, nor what the community used before that.

They made incremental updates to the scrum guide in 2011 and 2013. Together they established the globally recognized body of knowledge of scrum as we know it today.

Recently, in July 2016, a new section was added to the scrum guide: the scrum values.

The scrum values

Successful use of Scrum depends on people becoming more proficient in living these five values: commitment, courage, focus, openness, and respect.

Let’s have a look at each of those values, including the one-sentence description from the scrum guide.

Commitment

People personally commit to achieving the goals of the Scrum Team.

I am personally very happy that this value has been added to the scrum guide and is now officially recognized as an important value for a scrum team.

It is a challenge to get the team to commit to the goal of a sprint, but it is a crucial part of making the team successful.

Courage

The Scrum Team members have courage to do the right thing and work on tough problems.

People should stand up for their beliefs and for what they think is important to take the team to the next level.

People in a scrum team should not be afraid of tough problems. They face even the toughest problems and try to solve them together.

Focus

Everyone focuses on the work of the Sprint and the goals of the Scrum Team.

People do not pick up work outside of the committed sprint. They focus on the work that was agreed upon in the sprint planning.

Everybody works on what is good for the team, not what is good for themselves. The goals of the team have a higher priority than personal goals.

Openness

The Scrum Team and its stakeholders agree to be open about all the work and the challenges with performing the work.

There is no hiding of facts or withholding of the whole truth about what is going on. No political games are played.

Everybody is honest about the problems they face and, for instance, explains why things are delayed to give the stakeholders insight into what is happening within the team.

Openness in the team as well as openness to the stakeholders builds trust and a better working environment.

Respect

Scrum Team members respect each other to be capable, independent people.

Respect is one of the key elements of building a good culture within the team and in the whole organization.

Treat people as you want to be treated!

How has the scrum guide been extended?

How were those changes in the scrum guide implemented?

Ken and Jeff built a community around the scrum guide and run an online platform, called User Voice, where people can make suggestions for changes to the scrum guide.

Other people can vote on those suggestions, but the final decision on changes to the scrum guide is still made by Ken and Jeff.

Wrap up

Today scrum is recognized as the most widely applied framework for agile software development.

The scrum guide plays a very big role in the success of scrum, because it is the first reference point for people wanting to learn more about it.

The newly added section containing the scrum values is a very welcome improvement to the scrum framework.

I will definitely present the scrum values to my team to get them thinking about this topic and get a discussion rolling. Let me know if you did the same, and share your experience in a comment.

That’s it for this week. Stay tuned and HabbediEhre!