Scrum Retrospective 4 – Decide What To Do

This is the fourth post of my blog post series about the five phases of a Scrum Retrospective. In this post I cover Phase 4—Decide What To Do.

If you haven't read the previous posts in this series, you can start with Phase 1—Setting the Stage.

These five stages are presented in the book Agile Retrospectives – Making Good Teams Great by Esther Derby and Diana Larsen. They are:

  1. Set the Stage
  2. Gather Data
  3. Generate Insights
  4. Decide What to Do
  5. Close the Retrospective

I use this five-step approach as a guideline in each Retrospective meeting I lead as a Scrum Master.

Until now we have covered the first three phases. After Phase 1—Setting the Stage, we spent a considerable amount of time in Phase 2—Gather Data to identify the team's most crucial problems.
Then I covered Phase 3—Generate Insights in my previous post. At the end of that phase we had a list of possible root causes and potential solutions for a problem.

Now, let’s go on with Phase 4 and decide what we are going to do about the problem.

Decide What To Do

The goal of this phase is to create action items to improve in the next iterations.

You identified a list of possible root causes of the problem and potential solutions. Now you want to decide what to do differently in the next Sprint.

So you create a list of action items describing exactly what you want to do differently.

When creating action items there are a couple of things you want to keep in mind:

  1. Make action items actionable
  2. Make action items small
  3. Don’t pick too many action items
  4. Make action items visible

Let’s look at each bullet point of this list a bit more closely.

1) Make action items actionable

You want to phrase your action item in a way that makes it completely clear what needs to be done. Be as specific as possible. It shouldn't require a discussion with the team whether an action item can be considered completed or not.

An example of a bad action item is “Improve team collaboration”. Phrased like this, it does not tell you what exactly you need to do. It leaves a lot of room for interpretation.

A better example would be “Pair programming on Monday and Wednesday from 10:00 to 12:00”. This tells you exactly what you need to do and when you need to do it. And it is clear to everyone on the team that the action item can be marked as completed only once you have actually worked in pairs for those two hours.

2) Make action items small

You want to make action items small enough that they don't have an impact on the amount of planned work for the upcoming Sprint. At this point in the Retrospective you don't want to commit to a big piece of work.

Planning and prioritizing is done in Sprint Planning, not in the Retrospective meeting.

If the action item requires a couple of days' effort to complete, then it is definitely too big.

For instance, “Implementing 2-factor authentication for the web application” is too big for an action item of a Retrospective.

If your team is sure that they want to work on that with high priority, then the action item might be “Create a user story for 2-factor authentication and put it on top of the backlog”.

Big action items carry the risk of not being worked on or not being finished within the Sprint. Or something else might be considered more important in the next Sprint Planning.

If that happens regularly, then your whole Retrospective meeting has become just a meeting where you commit to things you wish to improve instead of actually improving them.

Keeping action items small makes it much more likely that they will actually be completed, and your team builds a consistent habit of finishing them.

3) Don’t pick too many action items

If you have too many action items it is likely that the team will forget about some of them. It is easy to remember 3 things, but it is a lot more difficult to remember 7 or even more things.

Therefore I make sure that my team creates a maximum of 3 action items per Retrospective so we can keep the focus on the few most important items.

4) Make action items visible

Another method that has proved to work very well is to make action items visible to the team.

You place the stickies with the action items somewhere the team can see them. Usually I put them on the physical Scrum board in the team area. Additionally, you can use some “screaming” colors, like pink or orange, so that they stand out on the board.

Then, a couple of days after the Retrospective meeting, when you see that an action item is not marked as done yet, you can ask the team during Standup about the status. You make the action item “visible” in their mind by pointing it out during the Standup.

So, by making the action items visible to the team in different ways, you do your best to make sure they will be worked on and completed by the end of the Sprint.

Try-Measure-Learn Loop

There is one more important thing, which you should keep in mind when creating your action items:

Make clear to the team that you are dealing with complex problems here.

A problem rarely has just one root cause and exactly one solution. Usually a combination of circumstances leads to your specific problem, and therefore there are also multiple things you need to do in order to solve it.

So there is usually no single obvious thing you can do to fix the problem. Make clear to the team that Scrum is an empirical approach—you are working with a try-measure-learn loop.

If you have identified a possible solution, you cannot be sure whether it will really work. But you can decide to give it a try for a couple of Sprints and measure the outcome.

Then, based on what you measure, you learn whether it is a good solution worth keeping. Or you might measure bad results and decide to drop it, because it didn't help to fix the problem.

I have noticed that if this is not clear to people, some respond very negatively to certain action items.

For instance, imagine you have a big issue and one action item to tackle it. A single simple action item is often just a drop in the ocean: even if it is completed, it will not have a big impact.

Therefore it is not surprising that some people will react like “That’s not gonna help anyway! We are just wasting our time here with trying to solve a problem we cannot tackle anyway”.

So, explaining that we are working with a try-measure-learn model is a good way to show that this is just a first step in, hopefully, the right direction. It might shift people's minds to a more positive attitude towards the created action items.

Last Step

Ok, so at the end of this phase we have created a few small, actionable items to improve our process. Now we can continue with the last phase, Phase 5—Close the Retrospective.

I will cover the details of Phase 5 in the next post of this series, which I am going to publish in about a week. So stay tuned and keep an eye on the blog.

Ok, that’s it for today. See you around, HabbediEhre!

Scrum Retrospective 3 – Generate Insights

This is the third post of my blog post series about the five phases of a Scrum Retrospective. In this post I cover Phase 3—Generate Insights.

If you haven't read the previous posts in this series, you can start with Phase 1—Setting the Stage.

These five stages are presented in the book Agile Retrospectives – Making Good Teams Great by Esther Derby and Diana Larsen. They are:

  1. Set the Stage
  2. Gather Data
  3. Generate Insights
  4. Decide What to Do
  5. Close the Retrospective

I use this five-step approach as a guideline in each retrospective meeting I lead as a Scrum Master.

I covered Phase 2—Gather Data in my previous post. At the end of that phase we had a list of subjects for further discussion.

Now, let’s go on with Phase 3 and generate some insights from those subjects.

Generate Insights

The goal of this phase is to dive deeper into at least one of the subjects from the previous phase. We want to uncover the root cause of why certain things happened. And then we want to find options for a possible solution.

We can split this phase into two steps.

First, we decide which particular subject to select, so that we focus on one specific subject rather than talking about multiple topics at the same time.

In the second step we dive deeper into the selected subject to find the root cause.

Step 1: Decide on a subject

Sometimes the decision about which subject is the most important or valuable to pick is obvious. Often there is one topic that bothers the team the most, so you as the Scrum Master will naturally pick that particular subject.

For instance, if the application managed by the team had a big outage during the sprint, then this is probably the most important topic for the team.

If the team hasn't already held a separate Blameless Post Mortem for that outage, you as the retrospective leader will obviously pick that topic.

Dot-Voting

But sometimes it is not so obvious. In such a situation I usually make a pre-selection of possible candidates and then ask the team to vote.

The pre-selection is just done to make the voting easier, because there are often some subjects that you don't need to dive into more deeply.

For instance, if somebody mentioned that he is so happy that he finished his Professional Scrum Master certification, then this is really great to share within the team. But you probably don’t want to dive deeper into that subject.

After the pre-selection I usually use dot-voting, which works as follows.

Everybody gets a set of virtual dots and can place them on the sticky notes on the board. The sticky note with the most dots wins.

For instance, everyone gets 3 dots. Then they can go to the board with the sticky notes and place their dots there. Afterwards you count the dots and the topic with the most dots will be selected.
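Purely as a toy illustration (the topic names below are invented), the counting step of dot-voting boils down to a frequency tally:

```python
from collections import Counter

def dot_vote(votes):
    """Return the topic with the most dots and its dot count.

    votes: one list entry per dot placed on a sticky note.
    Ties go to the topic that received a dot first.
    """
    topic, dots = Counter(votes).most_common(1)[0]
    return topic, dots

# 3 team members, 3 dots each; dots may be stacked on a single topic.
votes = ["outage", "outage", "flaky tests",
         "outage", "slow builds", "outage",
         "flaky tests", "outage", "slow builds"]
print(dot_vote(votes))  # → ('outage', 5)
```

In the meeting room you count the dots by eye, of course; the sketch just shows that the winner is simply the most frequent choice.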

Ok, now that we have selected a topic, we can dive deeper.

Step 2: Dive deeper

The goal of this step is to find the root cause of the selected problem.

There are different activities to get to the core of a problem. The Retromat can help you find many possible activities.

My favorite options are to have a discussion with the team or to use the 5 Whys activity. Let’s have a closer look at both of these options.

Facilitating a discussion

Having a focused discussion on a specific issue is a very natural and simple approach.

In my experience, people, especially very analytical people like software developers, don't like to participate in “weird” activities. One developer once asked me why we were doing these “Montessori” games when we could use our time much more effectively by writing some code.

So I tend to keep the number of unfamiliar activities low, especially with a team that is not yet very familiar with retrospective meetings. Just having a discussion feels natural to everyone involved, because you have discussions all the time. And a good discussion can produce very good results as well.

In order to have a good discussion you need some facilitation skills.

During a discussion some people might be very outgoing, talking and talking so that you can hardly stop them. Others like to stay in the background and don't say much; they only give their opinion when you ask them directly.

As a good facilitator it is your job to notice such behavior and do something about it.

For instance, you might interrupt talkative Mary and pass the ball to introverted Tom like this: “Thank you, Mary, for your opinion. Tom, you haven't said much yet. What do you think?”

Another thing I tend to do during discussions is take notes on the most crucial points being discussed. I try to identify possible causes of problems and possible solutions.

Then I put them on the wall, so the group can use that information in the next phase to make a decision.

To be honest, this is very hard to do, because you need to facilitate the discussion and take notes at the same time.

You will get better with practice, but at the beginning I think it is not necessary to take notes. It is more important to facilitate the discussion properly.

Ok, so having a good discussion is a nice way to find the core of a problem. Another option to achieve the same result is the 5 Whys activity.

5 Whys

The 5 Whys activity is a method to drill down to a problem by repeatedly asking Why?

The name comes from the fact that you often find the root cause of a problem with the fifth answer.

For example, let's imagine the team struggled with an outage of their product during the sprint, and we want to identify the root of the problem with the 5 Whys activity.

Why was there an outage? Something went wrong during deployment, so the application didn't start anymore.

Why did something go wrong? Because Tom, who started last month, made a mistake. He was deploying for the first time and didn't have the experience.

Why was he doing it for the first time? Because Mary was on holiday, so somebody else had to do it. Unfortunately, he didn't get proper training before that.

Why was there no training? Because it is not part of the onboarding process.

So we have identified the root cause of the problem. And the solution might be to add proper deployment training to the onboarding process.

So by repeatedly asking Why? you can drill down to the core of a problem. And after identifying the root cause you can look for possible solutions.
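As a minimal illustration, the chain from the example above can be written down as data, with the last answer being the root cause (the example stops after four questions, as in the text; wording is paraphrased):

```python
# The 5 Whys chain from the outage example as (question, answer) pairs.
five_whys = [
    ("Why was there an outage?",
     "Something went wrong during deployment."),
    ("Why did something go wrong?",
     "Tom made a mistake; it was his first deployment."),
    ("Why was it his first deployment?",
     "Mary was on holiday and Tom had no training."),
    ("Why did Tom have no training?",
     "Deployment training is not part of the onboarding process."),
]

# Each answer feeds the next question; the last answer is the root cause.
root_cause = five_whys[-1][1]
print(root_cause)  # → Deployment training is not part of the onboarding process.
```

Capturing the chain like this (on stickies or in the meeting notes) keeps the reasoning visible when the team later decides what to do about the root cause.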

Recap

Ok, let’s quickly recap the important things of the Generate Insights phase.

The goal of this phase is to drill down to the root cause of a specific problem and find possible solutions.

In the first step we decide on a specific subject to analyze, for instance by using dot-voting.

In the second step we dive deeper. Among a list of possible options I prefer to have a discussion or use the 5 Whys activity to get to the core of the problem.

When you are done with one problem and there is enough time left in your retrospective, you can also try to tackle another subject.

Ok, so at the end of this phase we have analyzed the root cause of at least one specific problem. Now we can continue with the next phase.

You can read about the details of the next phase here: Phase 4—Decide What To Do.

Meanwhile, let me know in the comments about your favorite activities for the Generate Insights phase. Maybe you have something you absolutely love to practice with your team?

Ok, that’s it for today. See you around, HabbediEhre!

Scrum Retrospective 2 – Gather Data

This is the second post of my blog post series about the five phases of a Scrum Retrospective. In this post I cover the most crucial ideas for Phase 2—Gather Data.

If you haven't read the previous post in this series, you can find it here: Phase 1—Setting the Stage.

These five stages are presented in the book Agile Retrospectives – Making Good Teams Great by Esther Derby and Diana Larsen. They are:

  1. Set the Stage
  2. Gather Data
  3. Generate Insights
  4. Decide What to Do
  5. Close the Retrospective

I use this five-step approach as a guideline in each retrospective meeting I lead as a Scrum Master.

Ok, let’s get to the meat.

Gather data

The goal in this phase is to bring the facts of the sprint to the table, so that every participant has the same picture of what happened during the iteration.

I usually split this phase into two steps. First, I present the hard facts and statistics based on the data the team generated during the sprint. Second, we gather the insights and personal opinions of each individual to form a complete picture.

Step 1: Hard facts

The idea of this step is to make the status quo transparent, based on the facts you already have. This type of data is usually generated during the sprint and does not reflect any personal opinions, but hard facts.

It is the responsibility of the Scrum Master to get this information before the meeting starts. The Scrum Master doesn't have to gather the data himself, but he is responsible for making sure it is available for the meeting.

This data includes the sprint goal and the number of planned and delivered story points, as well as any other hard facts measured during the sprint.

Sprint goal

First of all, I name the sprint goal and we figure out whether we achieved it. Most of the time it is obvious whether or not we made the sprint goal, but sometimes there is a short discussion within the team. This is fine, because I want a collaborative decision on whether we made it or not.

I also use this information to keep track of how the team is doing over time. Then you can mention in the retrospective that the team achieved the sprint goal, for instance, 5 times in a row.

Story points

I mention how many story points we initially planned for the sprint and how many were successfully finished.

Here it is a good idea to have an Excel sheet with historical data prepared and to show in a graphical overview how the team is doing over time. Keeping this historical data in a diagram can easily give you insights into how the team grows over time, becomes more productive, finishes more work, and so on.
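As a sketch of what such a historical overview could look like, here is a minimal script over invented sprint data; a spreadsheet chart serves exactly the same purpose:

```python
# Invented history: (sprint number, planned points, delivered points)
history = [
    (53, 40, 32),
    (54, 38, 35),
    (55, 42, 40),
    (56, 41, 41),
]

for sprint, planned, delivered in history:
    pct = 100 * delivered / planned        # share of the plan that was delivered
    bar = "#" * (delivered // 4)           # crude text chart: one '#' per 4 points
    print(f"Sprint {sprint}: {delivered:>2}/{planned} points ({pct:5.1f}%) {bar}")
```

The point is not the tooling but the trend: a rising delivered-vs-planned ratio over several sprints is something worth naming out loud in the meeting.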

Other measured data

You can mention at this point any other important data, which has been measured during the sprint.

For instance, if your team struggles with too many open bugs and too few of them are solved and closed during the sprints, then this is important data to keep track of. It is usually easy to obtain by having a look at the bug-tracking tool you are using.

In such a case you can announce the number of open bugs before and after the sprint.

If your team constantly has problems with a high number of incidents during the sprint, which prevent the team from making good progress on the planned user stories, then this is also important data to keep track of. It is usually also available in a bug-tracking tool, and you can bring it to the group here as well.

Basically, any interesting data that has been measured and might be important to the team is welcome at this point, because sharing it helps everybody form the same picture of what happened during the sprint.

Data you don’t want to show

But there is also some data that you don't want to show to the team: any kind of data that puts a specific person on the spot, unless you specifically intend to do so.

For instance, showing how many story points were delivered by each person puts the lowest-scoring person in an uncomfortable situation and doesn't help the team. Therefore this should be avoided.

You should also make sure to bring only data that has been measured and is therefore a fact. Don't bring data that you think is a fact but is actually just your personal opinion.

For instance, if you know that people didn't do as much pair programming as initially planned, but you didn't measure how often they actually worked in pairs, then don't mention it at this stage.

At this point it is important to give the team just the hard facts that have been measured, so that everyone knows what was going on in the sprint.

Step 2: Personal opinions from each individual

When the hard facts are on the table, the second step in the Gather Data phase is to collect the personal opinions and feelings of each individual.

Here it is important that everyone has a voice. Therefore I always use a format that gives each individual some time to think alone and express his or her own opinion.

Silent writing

To achieve this I always start with a couple of minutes of silent writing, where each person writes down items on sticky notes. This way everyone has to think about his or her own experiences, feelings and events during the sprint.

If you don't use sticky notes but just start a discussion where everyone shares his or her thoughts, then you usually end up with a very unbalanced set of data, because the more confident people will take over the floor and express their opinions while other participants won't get to contribute their thoughts.

Especially if there are a few introverts in the team, these people are not going to speak up while, for example, the senior developer holds the floor. Even though the introverts' opinions might be very valuable, they won't get the chance to contribute them to the discussion.

Therefore, for the Gather Data step I always use a format that starts with a few minutes of silent writing, followed by a round of explanation where each person explains his or her stickies to the group.

Formats for the gather data step

There are many different formats for the Gather Data step out there, and they basically all work the same way, with just slight differences in how the questions are asked. For instance, there is the Start Stop Continue format, the Mad Glad Sad retrospective, or you can use the Sailing Boat.

Some of these formats work better than others in different situations. For instance, if the sprint went really well and there is almost nothing to complain about, then the Start Stop Continue retrospective is a good way to generate some ideas for improvements.

If you use the Mad Glad Sad retrospective when a sprint went really well, it will basically work, but in my experience you will get much better results using the Start Stop Continue format. If you want to know more, check out my post about the Start Stop Continue retrospective.

Recap

Ok, let’s quickly recap the important things of the Gather Data phase.

The goal of this phase is to bring all relevant data to the table so that every participant has the same information about the sprint.

This data consists of the hard facts measured during the sprint: the sprint goal, the number of planned and delivered story points, and any other relevant facts that have been measured.

The second step is to get the opinions and feelings of each individual through a few minutes of silent writing. After that, each individual presents his or her stickies to the team and clarifies what they mean.

At that point you have all the important data ready and can continue with the next phase, which is Phase 3—Generate Insights.

You can read about the next post in this blog post series here: Phase 3—Generate Insights.

Meanwhile I would be interested in formats you mostly use to Gather Data in your Scrum retrospective. Leave a comment if you have any interesting insights, which formats work well in specific situations!

Ok, that’s it for now. See ya in a bit, and HabbediEhre!

Scrum Retrospective 1 – Setting The Stage

This is the first post of my blog post series about the five phases of a Scrum Retrospective. I will cover the most crucial ideas for Phase 1 — Setting the stage.

These five phases are presented in the book Agile Retrospectives – Making Good Teams Great by Esther Derby and Diana Larsen. They are:

  1. Setting the stage
  2. Gather Data
  3. Generate Insights
  4. Decide What to Do
  5. Close the Retrospective

I use this five-step approach as a guideline in each retrospective meeting I lead as a Scrum Master.

In this post I will explain what the leader of an agile retrospective needs to know and should do in order to have a successful start of the retrospective meeting.

Setting the stage

The goal of the first phase is to bring the team's minds into the retrospective meeting, so that everyone focuses on the work at hand.

You want to create an environment where everybody feels safe to speak. Everyone should be in a state where they feel like contributing their thoughts and ideas as much as possible.

People were working on other tasks just a few minutes ago, before they had to stop and come to the retrospective meeting.

Their minds are still filled with thoughts of their previous tasks. For example, one colleague might still be thinking hard about how to solve the bug he is currently working on.

The goal for you as the leader of the meeting is to bring the focus of the team to the work at hand.

What, Why, When

To set the stage for the meeting, you start with a short introduction giving some facts about the meeting itself.

You mention the goal of the meeting, which is to reflect on the previous sprint. And you announce the available time for the meeting, so the participants know how long they will be locked up in the meeting room.

For instance you might say something like this:

Here we are again at the end of our sprint—this time we completed sprint number 56. Let's look back together and figure out what went well and what didn't, so we can improve our process and become even more productive. Now it's 4 minutes past 2, so we have exactly 56 minutes to come up with some results.

After this announcement you should have the group's attention.

I have learned that there is another important part of Setting the Stage: getting everybody to say a few words.

Get everybody to speak up

At the start of a retrospective it is important that everyone says something out loud. If you speak out loud at the beginning of a meeting, the chance that you will join the follow-up discussion increases drastically.

I couldn't find where I read about this, but I remember it was backed by some impressive statistics. Since then I always perform this little exercise at the beginning of a meeting.

There are a couple of possible activities that give everyone the chance to speak up.

I usually just ask a simple, open-ended question and have people answer briefly one after another, going around in a circle.

The question might be: How do you feel about the previous sprint on a scale from 1 to 5?

Another example is to ask the participants to state in one sentence what they want to get out of this retrospective meeting.

And another example might be: Tell us how you feel about the previous sprint in a maximum of 3 words!

The retrospective leader has to make sure the statements stay really short, meaning just a few words. You want to avoid people starting a discussion at this stage. You just want everybody to speak up, to trigger the mind-shift and enable everybody's maximum contribution in the next phases.

When everyone has put some thought into answering the question, the group should be focused on this meeting. Everyone has put his previous task on hold and turned his attention to the work at hand. The group's mindset is now in a state of getting some work done.

Next: Phase 2

At this point you are finished with Setting the stage and you move forward to the next phase, which is: Phase 2—Gather data.

You can read about the next post in this blog post series here: Phase 2—Gather data.

Meanwhile I would be interested in how you kick-off your Scrum retrospective meeting and what you do differently. Do you have any interesting insights, which you can share in the comments?

Ok, that’s it for today. See you in a bit, and HabbediEhre!

5 Major Problems With Synchronous Code Reviews


If your team does not use git as a source control system, then you should continue reading, because you most likely use the synchronous code review type, and that gives you 5 major problems.

But even if you do use git, you might still use the synchronous code review type. If so, then you still have these 5 problems—even though you use git.

By synchronous review I mean a code review that is performed together in front of the coder's screen, immediately after the coder has finished coding.

In contrast, an asynchronous review is done by the reviewer on his own screen, on his own schedule. The reviewer uses tools to write comments, which are then forwarded to the coder to improve the code.

If you want to know more about the differences between code reviews, I recently wrote a separate blog post about the 4 types of code reviews.

At the time of this writing my team uses Team Foundation Server (TFS) and its built-in source control TFSC (Team Foundation Source Control). Most of the time we also use synchronous code reviews and I found a couple of problems with that approach.

Ok, so what are the downsides of synchronous code reviews?

5 major problems with synchronous code reviews

Synchronous code reviews cause the following problems:

  1. Direct dependencies
  2. Forced context-switching
  3. Not done consistently
  4. Time pressure
  5. Lack of focus

Let’s have a look at each of these issues in more detail.

1) Direct dependencies

A big problem with synchronous code reviews is that the reviewer has to review based on the coder's schedule. As soon as the coder is finished, the reviewer is expected to stop his own task and start the review together with the coder.

In such a situation the coder depends on the reviewer and has to wait until the reviewer is available. And it is not worthwhile for the coder to start a new task, because the reviewer should be available soon.

There is a direct dependency between the reviewer and the coder.

And direct dependencies are not good, because the more direct dependencies you have in your process flow, the higher the chance of blockers.

When the coder requests a review, the reviewer might be busy with something else, maybe something more important.

He might tell the coder that he will be available for the review soon.

And the coder is waiting, and waiting, and waiting.

5, maybe 10 minutes later they finally start with the review. During that time the coder was just waiting, doing nothing except maybe checking his Facebook news feed.

In contrast, with asynchronous code reviews the coder does not depend on the reviewer. When the coder is finished, he uses a tool to create a review request and continues with his next task.

With asynchronous code reviews there is no direct dependency between the coder and reviewer and therefore no potential blocker.

2) Forced context-switching

Imagine you are working on your own task and are in the middle of implementing a complex algorithm.

Then your teammate is asking for a review because he just finished his task.

This means that you have to stop working on your own task and review the task of your coworker.

Because of this you are dragged out of the context of your own work.

This has some negative side effects: context-switching is exhausting and frustrating. And it will take you a few moments to focus on the new context.

I have been in that situation as a reviewer myself a couple of times, and I don't like it.

When the coder started to explain his code to me, I had to ask him to repeat the last one or two sentences. My brain was still processing the work I had been doing just before, so I hadn't been able to take in the new context.

After the review is finished, you have to switch context again. It will take you some time to get back to your own task and the context where you left off.

In contrast, with asynchronous code reviews the reviewer can review the code whenever it fits his own schedule. He is not forced to switch context based on the coder's schedule.

The reviewer can finish with his task first and only then start with the review, reducing the amount of required context-switches to a minimum.

3) Not done consistently

Imagine you are starting to work on a new task.

At the beginning you start to investigate the existing code base to figure out which of the existing methods and classes can be reused.

Let’s assume that a lot of the required functionality is already there and you only have to put the pieces together by calling a couple of methods.

In the end there are just a few lines of code that you had to change to achieve the goal of your task.

Then you wonder whether it is really necessary to ask your colleague for a review. The code changes are actually so simple that it is almost impossible that a review might lead to any improvements.

Therefore, you decide to skip the review and just go ahead and commit the changes. You save yourself and also your colleague some time.

This is a natural and understandable behaviour, but it is dangerous.

How do you decide when a code change is simple enough so it does not require a review?

It is very hard to define rules for that.

Review everything

So the best rule is to just review everything, no matter how small and simple the code changes are.

But even then you most likely have a problem that people are not going to follow this “review everything” rule.

It is just additional effort to get a reviewer to your desk for such a small code change. You have to ask your colleague whether he has time for a review, wait for him to roll over, explain what you were doing, etc.

The barrier for performing a review is actually quite high when using a synchronous code review approach.

In contrast, when the team uses an asynchronous code review approach, then it is quite easy to create a pull request and assign it to your colleague. So requesting a code review is way simpler when you use an asynchronous review approach.

Therefore, the likelihood that a code review actually happens is also higher when using an asynchronous code review compared to a synchronous one.

This argument is implicitly also supported by a survey conducted with 550 developers in 2017.

One finding of this survey was that developers who use a code review tool are four times more likely to review on a daily basis than those using a meeting-based approach.

This result is definitely not a perfect proof for the claim that asynchronous reviews happen more often compared to synchronous ones. But I think you can safely assume that you need additional review tools for an asynchronous review type to give feedback to the code. In contrast, for a synchronous review you don’t need that much tooling, although it might help.

Therefore, the likelihood that code reviews are done consistently is higher for the asynchronous approach compared to the synchronous one.

4) Time pressure

Let’s assume your team is using synchronous reviews and your coworker just finished his task and asked you to perform a review.

So you join him at his desk and he starts to explain about the task and the complex problems he had to solve as part of his task.

As he spent the last couple of hours analyzing the problem and finding a solution, he has a lot of insights and knowledge about the task. Therefore his explanations are very sharp and right to the point.

But it is quite difficult for you as a reviewer to follow. You didn’t invest the last couple of hours to analyze the problem. You didn’t put that much thought into finding a good solution.

Therefore it is very hard for you to decide within a few moments whether the presented solution is the best or not.

But your colleague wants to get his code approved as soon as possible.

This puts you, as the reviewer, under pressure. On the one hand you want to take the time to think the solution through, because you want to validate whether it is really a good solution. But on the other hand you want to approve the solution so your colleague can move on.

In this situation you are not encouraged to question the solution too much. Even if you don´t understand exactly how the new code works and why it has been implemented that way, you are not going to ask too many questions.

Why? There are three reasons for this:

Trust in coder

First of all, your coworker might explain his solution in a very self-confident manner. This builds trust and you might think that he knows what he is doing. He has been working on this for the last couple of hours and the solution seems solid and sound.

Trust is good in a team in general, but not for a code review.

Actually the whole point of a code review is to distrust your colleague and double check everything to make sure the code is really doing what it is supposed to do.

Avoidance to ask questions

Second of all, you as a reviewer don´t want to ask too many questions, because it might look like you are not able to follow the explanations—or are just too slow to understand what is going on.

Therefore it is easier to not ask too many questions, because you don´t want to look like a fool.

Avoidance to review same code lines multiple times

Third of all, you don’t want to look through the same piece of code multiple times, when the coder is sitting next to you. Even if you review a complex algorithm that is hard to understand when looking at it for the first time, you are not going to switch back and forth between different code files for a couple of minutes while the coder is waiting next to you.

Because then it might appear that you are not able to grasp the solution and that your mind is just too slow to understand the problem and figure out the solution.

So what happens is that you are not going to look too closely into validating the solution. You don’t have the time to think everything through, because of the pressure of having the coder sitting next to you.

Naturally, this results in worse code quality because the review is performed under time pressure and therefore you might miss some potential issues in the code.

In contrast, with an asynchronous review you work on your own schedule and are not under time pressure. You can take as long as you need to understand and review the solution. Therefore it is more likely that you find the bugs before you approve the code.

5) Lack of focus

Let’s say you are in the middle of coding—working on a new complex feature and you are totally focused to implement this new algorithm you have in your mind.

While you are highly concentrated your team mate next to you just finished his task and asks you for a review of his code.

Of course, you are not happy about this.

But you cannot decline the request because your colleague is waiting. He is blocked until you review his code. Only when the code is approved and committed can he pick up the next task.

As you are a nice person you will roll over with your chair to your teammate and do the review.

But you are frustrated.

You want to get back to your own work as soon as possible.

And this feeling has an impact on the quality of code review you perform. You want to get it done as fast as possible, so you can continue with your complex algorithm.

Therefore you don´t look too closely and just do a quick-and-dirty review without putting a lot of effort into it.

This has the effect that you might miss some defects in the code, which you normally would have spotted.

As a matter of fact nobody likes to get interrupted while working on a complex task that requires highest focus and attention.

However, when the team performs asynchronous code reviews you don’t have these problems.

You perform the review when it fits your own schedule—when you have the time and the full mental capacity. Then you can put all your attention on the code to review.

How to fix these problems?

Ok, so now I hope I convinced you that synchronous code reviews are bad, if they are used as the default code review type within a team.

There are of course some cases where the synchronous type works best, as I described here, but these cases should not be the norm.

As I mentioned at the beginning, at the time of this writing my team uses the synchronous approach as the default review type as well. And we do face all of the above-mentioned problems.

What are we doing to solve these issues?

Well, we are in the process of moving our codebase to git. With git in place and with the tooling provided by TFS we will be able to use asynchronous code reviews.

I am really looking forward to that, but it will still take some time to prepare the move.

Ok, that’s it for today. Let me know in the comments section if you have any remarks. Take care and HabbediEhre!

4 Types Of Code Reviews Any Professional Developer Should Know About

Every professional software developer knows that a code review should be part of any serious development process. But what most developers don´t know is that there are many different types of code reviews. And each type has specific benefits and downsides depending on your project and team structure.

In this post I am going to list the different types of code reviews and explain how each type works exactly. I am also going to give you some ideas on when to use which type.

Ok, let’s get started.

Here we go.

First of all, on a very high level you can classify code reviews into two different categories: formal code reviews and lightweight code reviews.

Formal code review

Formal code reviews are based on a formal process. The most popular implementation here is the Fagan inspection.

There you have a very structured process of trying to find defects in code, but it is also used to find defects in specifications or designs.

The Fagan inspection consists of six steps (Planning, Overview, Preparation, Inspection Meeting, Rework and Follow-up). The basic idea is to define output requirements for each step upfront and when running through the process you inspect the output of each step and compare it to the desired outcome. Then you decide whether you move on to the next step or still have to do work in the current step.

Such a structured approach is not used a lot.

Actually, in my career I have never come across a team that used such a process, and I doubt I ever will.

I think this is because of the big overhead that the process brings with it, which is why not a lot of teams make use of it.

However, if you have to develop software that could cause the loss of life in case of a defect, then such a structured approach for finding defects makes sense.

For example if you develop software for nuclear power plants then you probably want to have a very formal approach to guarantee that there is no bug in the delivered code.

But as I said, most of us developers are working on software that is not life-threatening in case of a bug.

And therefore we use a more lightweight approach for code reviews instead of the formal approach.

So let’s have a look at the lightweight code reviews:

Lightweight code reviews

Lightweight code reviews are commonly used by development teams these days.

You can divide lightweight code reviews into the following subcategories:

  1. Instant code review—also known as pair programming
  2. Synchronous code review—also known as over-the-shoulder code review
  3. Asynchronous code review—also known as tool-assisted code review
  4. Code review once in a while—also known as meeting-based code review

Type 1: Instant code review

The first type is the instant code review, which happens during pair programming. While one developer is hitting the keyboard to produce the code the other developer is reviewing the code right on the spot, paying attention to potential issues and giving ideas for code improvement on the go.

Complex business problem

This type of code review works well when you have to solve a complex problem. By putting two heads together to go through the process of finding a solution you increase the chance to get it right.

With two brains thinking about the problem and discussing possible scenarios, it is more likely that you also cover the edge cases of the problem.

I like to use pair programming when working on a task which requires a lot of complex business logic. Then it is helpful to have two people think through all the different possibilities of the flow and make sure all are handled properly in the code.

In contrast to complex business logic, you sometimes also work on a task that has a complex technical problem to solve. By this I mean, for instance, that you make use of a new framework or explore a piece of technology you have never used before.

In such a situation it is better to work by yourself, because you can work at your own pace. You have to do a lot of searching on the web or reading documentation on how the new technology works.

It is not helpful to do pair programming in such a case, because you hinder each other while acquiring the required knowledge.

However, if you get stuck then talking to a colleague about the solution often helps you to view the problem from a different angle.

Same level of expertise

Another important aspect to consider when doing pair programming is the level of expertise of the two developers working together.

Preferably both developers should be on the same level, because then they are able to work at the same speed.

Pair programming with a junior and a senior does not work very well. If the junior has the steering wheel, then the senior next to him just gets bored, because he feels everything is too slow. In such a setting the potential of the senior is restricted and his time is wasted.

If the senior has the keyboard in his hand, then everything goes too fast for the junior. The junior is not able to follow the pace of the senior and after a few minutes he loses the context.

Only if the senior slows down and explains to the junior at a slower pace what he is about to do does this setup make sense.

However, then we are not talking about pair programming anymore. Then we are talking about a learning session, where the senior developer teaches the junior developer how to solve a specific problem.

But if both developers are on the same level, then it is amazing how much work they can accomplish in such a setting. The big benefit here is also that the two developers motivate each other, and in case one of them loses focus the other developer brings him back on track again.

To sum it up: pair programming works well when two developers with a similar level of experience work together on solving a complex business problem.

Type 2: Synchronous code review

The second type is the synchronous code review. Here the coder produces the code herself and asks the reviewer for a review immediately when she is done with coding.

The reviewer joins the coder at her desk and they look at the same screen while reviewing, discussing and improving the code together.

Lack of knowledge of reviewer

This type of code review works well when the reviewer lacks knowledge about the goal of the task. This happens when the team does not have refinement sessions or proper sprint planning sessions, where each task is discussed upfront.

This usually results in the situation where only a specific developer knows about the requirements of a task.

In these situations it is very helpful for the reviewer to get an introduction about the goal of the task before the review is started.

Lots of code improvements expected

Synchronous code reviews also work well if there are a lot of code improvements expected due to the lack of experience from the coder.

If an experienced senior is going to review a piece of code that has been implemented by a very junior guy, then the review generally works way faster when they do the improvements together after the junior claims he is done.

Downside of forced context switching

But there is a major downside of synchronous code reviews, which is the fact of forced context switches. This is not only very frustrating for the reviewer, but slows down the whole team.

In fact, I have written a separate blog post about the 5 major problems with synchronous code reviews. Therefore I won’t go into more detail about that type of code review here.

Type 3: Asynchronous code review

Then we have the third type, the asynchronous code review. This one is not done together at the same time on the same screen, but asynchronously. After the coder is finished with coding, she makes the code available for review and starts her next task.

When the reviewer has time, he will review the code by himself at his desk on his own schedule, without talking to the coder in person, but writing down comments using some tooling.

After the reviewer is done, the tooling will notify the coder about the comments and necessary rework. The coder is going to improve the code based on the comments, again on his own schedule.

The cycle starts all over by making the changes available for review again. The coder changes the code until there are no more comments for improvement. Finally, the changes are approved and committed to the master branch.
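This cycle can be sketched with plain git. Everything below is illustrative: the branch name, file and commit messages are made up, and a local bare repository stands in for the team’s git server, since the actual review request would be created in a tool like TFS or GitHub on top of the pushed branch:

```shell
set -e
# Scratch setup so the sketch is self-contained: a local bare repo
# stands in for the team's git server ("origin").
tmp=$(mktemp -d)
git -c init.defaultBranch=master init --bare "$tmp/origin.git"
git clone "$tmp/origin.git" "$tmp/repo"
cd "$tmp/repo"
git config user.email coder@example.com
git config user.name Coder
echo "base" > code.txt
git add code.txt
git commit -m "Initial commit"
git push origin master

# 1) The coder finishes her work on a feature branch, pushes it and
#    opens a review request in the tooling; then she starts her next task.
git checkout -b feature/new-algorithm
echo "change" >> code.txt
git commit -am "Implement new algorithm"
git push -u origin feature/new-algorithm

# 2) The reviewer, on his own schedule, fetches the branch and inspects
#    only the delta against master.
git fetch origin
git diff master...origin/feature/new-algorithm

# 3) After the comments are resolved and the request is approved,
#    the changes are merged into master.
git checkout master
git merge --no-ff feature/new-algorithm -m "Merge reviewed changes"
git push origin master
```

Steps 2 and 3 can repeat any number of times; the point is that nothing in this flow requires the coder and the reviewer to be available at the same moment.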

As you can see synchronous code reviews work quite differently compared to asynchronous ones.

No direct dependencies

The big benefit of asynchronous code reviews is that they happen asynchronously. The coder does not directly depend on the reviewer and both of them can do their part of the work on their own schedule.

Downside of many review cycles

The downside is that you might have many cycles of reviews, which might spread over a couple of days until the review finally is approved.

When the coder is done, it usually takes a couple of hours until the reviewer starts to review. Most of the time the suggestions made by the reviewer are then fixed by the coder only the next day.

So the first cycle already takes at least a day. If you have a couple of those cycles then the reviewing time spans over a week—and this is not even taking the time for coding and testing into account.

But there are options to prevent this long timespan from getting out of hand. For instance, in my team we made the rule that every developer starts with pending reviews in the morning, before he picks up any other task. And the same thing is done after the lunch break.

As the developer is out of his context anyway after a longer break you don´t force unnatural context switching and still have the benefit of getting the code reviewed in a reasonable time.

Comparing the benefits and downsides of this type of code review I think that asynchronous code reviews should be the default type for every professional development team.

But before I tell you why I think that way, let’s have a look at the 4th type of code reviews.

Type 4: Code review once in a while

A long time ago I used to do code review sessions about once every month together with the whole team. We were sitting in a meeting room and one developer was showing and explaining a difficult piece of code he had recently written.

The other developers were trying to spot potential issues, commenting and giving suggestions on how to improve the code.

I don’t think that any team uses the once-in-a-while code review method on a permanent basis. I can only think of one situation where this type could make sense: when the whole team has no experience at all with code reviews. Then getting everyone together in a room and doing the review together a couple of times might help everyone to understand the purpose and the goal of a code review.

However, in the long term this 4th type is not an adequate technique, because it is rather inefficient to have the whole team work through a piece of code.

Ok, now we have covered all types of code reviews.

So, now you might wonder which type you should choose.

Which code review type should I pick?

We talked about the formal type, which is obviously not so popular and hardly used in practice.

Then we talked about the category of lightweight code reviews and distinguished 4 different types.

Type 1, the instant code review, is done in pair programming and works well when two developers with a similar skill set are working on a complex business problem.

Type 2, the synchronous code review, works well when the reviewer lacks knowledge about the goal of the task and needs explanation by the coder. It also works well if there are a lot of code improvements expected due to the lack of experience from the coder.

But it has the downside of forced context switches, which is frustrating for the reviewer and slows down the whole team.

Type 3, the asynchronous code review, prevents the problem of forced context switching and works well for the most common use cases.

Type 4, the once-in-a-while code review is not a permanent option for a professional team and may be used only to get a team started with code reviews.

Use asynchronous reviews by default

I think that a professional team should have the asynchronous code review in place as the default type because it prevents a lot of drawbacks compared to the synchronous review.

The synchronous review can be used in case the reviewer is not able to make sense of the changes made by the coder. But in such a situation the reviewer is going to ask the coder anyway for additional clarification. And if you work in a team these situations should hardly occur.

In case you don’t have a real team and work as a group of people, then the synchronous code review makes sense. If the reviewer does not know at all what you were working on the last couple of days, then it makes sense to give a proper explanation before you do the review together.

Switching to pair programming makes sense if you have two developers with a similar skill set and work on a complex business problem. But usually a team consists of people with multiple levels of experience and it does not work on complex business problems all the time. Most of the time you have common tasks with average complexity.

Therefore, the best choice for a professional team is to use the asynchronous review by default and switch to the synchronous type or pair programming if necessary.

Ok, that´s it for today.

What type of code review does your team use? Do you know of another type of code review, which I missed here? Then please let me know in the comments.

Talk to you next time. Take care and HabbediEhre.

Scrum of Scrums

The Scrum of Scrums is a meeting at the inter-team level whose purpose is to surface dependencies between teams and align their collaboration.

In this article I would like to give you insights on how we implemented Scrum of Scrums at LeaseWeb.

I am going to show you a practical example.

I am not going to tell you about the theory behind it, because you can find such information in other places on the Internet.

I want to give you some practical insights on how Scrum of Scrums can be implemented, what tools we are using and the benefits and downsides we get out of it.

However, before we start, I would like to explain in a few sentences what Scrum of Scrums actually is.

Scrum of Scrums

The Scrum of Scrums has a similar purpose as the daily standup in a Scrum team. The difference is that the Scrum of Scrums is done on an inter-team level. This means, that a representative of each Scrum team is joining the Scrum of Scrums.

As in the daily standup of a Scrum team, in the Scrum of Scrums each team representative has to answer three questions:

  • What impediments does my team have that will prevent them from accomplishing their Sprint Goal (or impact the upcoming release)?
  • Is my team doing anything that will prevent another team from accomplishing their Sprint Goal (or impact their upcoming release)?
  • Have we discovered any new dependencies between the teams or discovered a way to resolve an existing dependency?

If you want to learn more about the Scrum of Scrums, then I recommend having a look at the Scrum@Scale guide. That´s also where the three questions come from.

The Scrum@Scale guide has been published recently (February 2018) by one of the fathers of Scrum, Jeff Sutherland.

This is definitely a good starting point to learn more about scaling Scrum.

Ok, let´s have a look on how we implemented Scrum of Scrums at LeaseWeb.

Scrum of Scrums implementation

I have been working at LeaseWeb for a couple of years. To give you a more practical insight, I would like to explain how we implemented the Scrum of Scrums at LeaseWeb.

12 Scrum Teams

At LeaseWeb there are 12 Scrum teams. Each team is responsible for a certain service or product and naturally there are often dependencies between the teams.

For instance, when building a new feature a couple of teams might be involved and have dependencies with each other. To realize the new functionality several teams have to make changes in their product. Therefore close collaboration is required. And that´s why Scrum of Scrums has been implemented.

Big board in the hallway

We have a big board in the hallway, where everyone can see it when walking by. So, if people from other departments are interested in what the development teams are currently working on, they can have a look at this board.

And the board is really big, actually it is a whole wall. If you want to know the dimensions, then I would say it is about three meters high and six meters wide. It is quite an impressive view when you enter the floor of the development department.

The board is basically a big table with columns and rows. There are columns for the current sprint n, the next sprint n+1, sprint n+2 and sprint n+3. In addition there are also columns for the next  months and the next quarter.

While the columns are used to display the time, each team gets a separate row in the table and fills it with cards.

The cards

The cards contain high-level information on what each team is doing in the current sprint, as well as what the teams are planning to do in the upcoming sprints.

The level of detail is higher in the current sprint and obviously decreases the more you plan in the future.

So it is totally fine to just have one card in the next-quarter column which has, for instance, “promo” written on it. This indicates that the team plans to work on a promotion feature in the next quarter. But it is important to understand that this is the high-level plan from today’s perspective. It is not written in stone and it might change – in fact it is very likely to change, because it is very difficult to plan that far ahead.

However, the team should have a more detailed plan of what they are working on in the upcoming sprint, especially if they have dependencies with other teams. That way they can coordinate and resolve those dependencies as well as possible.

But even though the plans might change in the future, it is good to have the card on the board, because it triggers discussions with other teams and stakeholders about what is the most important thing to work on and where we have inter-team dependencies.

The cards are magnetic and stick to the board. You can write on them with a whiteboard marker and reuse them again after cleaning. We also use different colors for different projects or epics. You can find those magnetic cards on Amazon, and the lines you can use to build the table on the board can be found here as well.

The Scrum of Scrums meeting

The Scrum of Scrums meeting itself happens once a week and is timeboxed to 15min.

A representative of each team explains what the team is doing in the current sprint and what it plans for the upcoming sprint – answering the three questions I mentioned above. As there are 12 teams, it is crucial to keep it short and not get into details.

In general, the focus during the explanations is on dependencies with other teams. All other details are left out.

In case there are no dependencies with other teams, then the most important goals of the team are mentioned. I usually name the sprint goal of my team there as well. Nonetheless, as I already said, the key is to keep it short.

If there are any questions during the explanation, they are answered right on the spot if the answer is short. In case the answer needs a more detailed explanation, it is postponed until after the Scrum of Scrums, and the people who are interested stay after the end of the meeting to hear and discuss the details.

If the answer triggers a discussion during the Scrum of Scrums, then any of the present Scrum Masters is allowed to interrupt the discussion and ask the participants to discuss the topic after the meeting. That´s how it is possible to keep the timebox of 15min – even though there are 12 teams.

Benefits and downsides

Let´s start with the downsides of having a Scrum of Scrums meeting.

Well, everyone is busy with his own work and has a tight schedule, and there comes another meeting that you have to attend. And even though it is just 15min, you need some time to prepare the board. In addition, you need to interrupt your normal work and therefore also lose some focus time because of the context switch – before and after the meeting.

But next to that there are also quite some benefits of having the Scrum of Scrums in place.

First of all, this formal process forces you to think about and explain what you are working on and what you are planning to do.

And just by that you might encounter impediments with other teams earlier. So you are able to work on resolving them, before those impediments become a blocker within your team. This is going to save your team a lot of hassle, nerves and time.

Another great benefit is that you hear what other teams are working on.

And you might have better insights into why they are not able to work on the tasks your own team depends on. This gives you a better understanding of the overall direction the whole department is going in and the goals the teams are working towards.

The Scrum of Scrums also fosters collaboration between teams.

I recall multiple occasions where, after the meeting, I went to other teams to ask for their help, or somebody came to me with information I didn´t know yet.

For instance, my team planned to work on a new backup solution, which I explained briefly during the meeting. Afterwards another Scrum Master poked me and told me that his team already has something similar in place.

Finally, the two teams ended up working together on a solution, and with the knowledge provided by the other team it was way easier for my team to implement a good solution. Without the Scrum of Scrums we would have probably worked on it by ourselves and presented the solution after the sprint. And only then would we have figured out that we already had the knowledge and infrastructure in the department and had just done the work twice.

Closeup

In this article I gave you an overview of how we implemented Scrum of Scrums at LeaseWeb. I explained what tools we are using and what benefits and downsides we get out of it.

Do you have Scrum of Scrums in place in your company as well? What are the differences compared to the implementation at LeaseWeb?

Or you don´t have Scrum of Scrums in place in your company yet and you plan to implement it? Let me know how it works!

Ok, that´s it for today. Stay tuned and HabbediEhre!

Understanding Docker with Visual Studio 2017 – Part 2

Reading Time: 7 minutes

In Part 1 of the series “Understanding Docker with Visual Studio 2017” I described what you need to prepare to get docker up and running on your Windows machine.

In this part 2 I´m going to explain how to use Docker with a .Net project using Visual Studio 2017.

I am going to describe each file, which is created by Visual Studio for the docker integration and what commands are executed by VS, when you start your application within a docker container.

At the end of this blog post you should have a better understanding about what Visual Studio is doing for you in the background and how you can debug a docker application using those tools.

I was working on a sample project called TalentHunter, which I put on github. I´m going to use that project to explain the steps. Feel free to download that project from my github page.

Adding Docker support

It is very easy to add docker support to your .Net project.

Just create a new web or console application using Visual Studio 2017 and then right-click on the web or console project -> Add -> Docker Support.

Voila, you have docker support for your project. That´s how easy it is!

By doing this, Visual Studio creates a bunch of files, which I´m going to explain here in more detail:

  • Dockerfile
  • docker-compose.yml
  • docker-compose.override.yml
  • docker-compose.ci.build.yml
  • docker-compose.dcproj
  • .dockerignore

Let´s look at each file and figure out what it contains.

Dockerfile
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WaCore.TalentHunter.Api.dll"]

The Dockerfile contains information that is used when BUILDING a new Docker image. It is NOT used when starting the container – this can be confusing, but it is important to understand the difference.

In the specific case above, the image is derived from the official Microsoft image for ASP.NET Core 2.0. It declares a build argument named “source”, which can be passed in when the image is built (for example via docker-compose).

It defines “/app” as the working directory and declares that the container listens on port 80 (EXPOSE only documents the port; publishing it to the host happens when the container is run).

When building the image, it copies the content of the path specified in the source argument into the working directory of the container. If no source argument is specified, the content of obj/Docker/publish is used as a fallback.
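The `${source:-obj/Docker/publish}` fallback in the COPY line follows the same `${var:-default}` convention as POSIX shell parameter expansion, which you can try out directly:

```shell
# ${var:-default}: use $source if it is set and non-empty, otherwise the fallback.
unset source
echo "${source:-obj/Docker/publish}"              # prints: obj/Docker/publish
source="bin/Release/netcoreapp2.0/publish"
echo "${source:-obj/Docker/publish}"              # prints: bin/Release/netcoreapp2.0/publish
```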

When the container is started, it will execute the command dotnet WaCore.TalentHunter.Api.dll to start the web application.
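A minimal sketch of the equivalent manual workflow, assuming the image name from the compose file and an arbitrary host port 8080. The `run` helper only prints each command, so you can inspect the sequence without a Docker daemon; change it to `run() { "$@"; }` to execute for real:

```shell
# Dry-run helper: prints each command instead of executing it.
run() { echo "+ $*"; }

IMAGE=herbertbodner/wacore.talenthunter.api

# 1) Publish the app so the build output lands where the Dockerfile expects it:
run dotnet publish -c Release -o obj/Docker/publish

# 2) Build the image, passing the "source" build argument explicitly:
run docker build --build-arg source=obj/Docker/publish -t "$IMAGE" ./WaCore.TalentHunter.Api

# 3) Start a container in detached mode, publishing container port 80 on host port 8080:
run docker run -d -p 8080:80 "$IMAGE"
```

As we will see below, docker-compose automates essentially these steps, for all services of the application at once.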

docker-compose.yml

An application usually consists of multiple containers (e.g. frontend, backend, database, etc.), but to keep it simple I have only one container in this example.

The docker-compose.yml looks as follows:

version: '3'
 
services:
  wacore.talenthunter.api:
    image: herbertbodner/wacore.talenthunter.api
    build:
      context: ./WaCore.TalentHunter.Api
      dockerfile: Dockerfile

The concept of a docker-compose file is to define the set of services – and their images – required to run an application. As already mentioned, this simplified example has only one image, named herbertbodner/wacore.talenthunter.api.

The last two lines in the file define where the dockerfile can be found, which is within the subdirectory WaCore.TalentHunter.Api and is named Dockerfile.

docker-compose.override.yml
version: '3'
 
services:
  wacore.talenthunter.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
networks:
  default:
    external:
      name: nat

This file complements the configuration in docker-compose.yml. It sets the environment variable ASPNETCORE_ENVIRONMENT and publishes container port 80 (to a random host port, because no host port is specified).

Next to that it also defines some basic networking configuration.

This docker-compose.override.yml file is used together with the docker-compose.yml (as we will see in a moment below).

docker-compose.ci.build.yml

If you have ever set up a traditional continuous integration/delivery pipeline, you probably know that you usually have to install a couple of tools on your build server to enable a successful build of your application (e.g. a build engine, third-party libraries, a git client, a sonar client, etc.).

The idea of the docker-compose.ci.build.yml file is to build an image that has all the dependencies installed which are required to build your application. This means that we can also create a Docker image for our build server.

However, we are not going to make use of this file in the remainder of this blog post, therefore it is out of scope. Just keep in mind that you can also dockerize your build server like this.

docker-compose.dcproj

This is the Docker project file and contains information relevant to Docker, like the version and where to find the configuration files.

.dockerignore
*
!obj/Docker/publish/*
!obj/Docker/empty/
!bin/Release/netcoreapp2.0/publish/*

The .dockerignore file contains a list of patterns for files that Docker should ignore when sending the build context to the Docker daemon.

As you can see in the file, everything is ignored (indicated by the *), except the three paths whitelisted with a leading ! in the last three lines.

Running the application with Docker

You can easily build and run the application by setting the docker-compose project as the startup project and hitting F5, or by right-clicking the docker-compose project and selecting “Build” in the dropdown menu.

Docker - Build Image

When you run the docker-compose project you will see in the output window (build section) the commands, which are executed by Visual Studio.

Some of those commands matter in general, but are not important for understanding what is going on. For instance, there are commands that check whether the container is already running, and if so, kill it.

“docker-compose up” command

Nevertheless, the important command executed by Visual Studio when hitting F5 is docker-compose up, with the following parameters:

docker-compose -f "D:\_repo\TalentHunter-Api\src\docker-compose.yml" -f "D:\_repo\TalentHunter-Api\src\docker-compose.override.yml" -f "D:\_repo\TalentHunter-Api\src\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose9846867733375961963 up -d --build

The docker-compose up command builds an image and starts the container.

Let´s have a closer look at the command and its parameters:

The -d and --build parameters at the end tell it to start the container in detached mode and to force a build. The -p parameter gives the project a certain name.

Then there are three configuration files specified with the -f parameter: docker-compose.yml, docker-compose.override.yml and docker-compose.vs.debug.g.yml.

All of them together contain the information on how to build the Docker image. We already looked at the first two files a moment ago. The last configuration file (docker-compose.vs.debug.g.yml) is generated by Visual Studio itself, and it differs depending on the build mode (debug or release).

In release mode a file named docker-compose.vs.release.g.yml is created, while in debug mode the file docker-compose.vs.debug.g.yml is used.
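Putting the pieces together, a debug-mode invocation has roughly this shape (a dry-run sketch that only prints the command; the project name is the generated example from above, and later -f files override values from earlier ones):

```shell
# Dry-run helper: prints the command instead of executing it.
run() { echo "+ $*"; }

run docker-compose \
  -f docker-compose.yml \
  -f docker-compose.override.yml \
  -f obj/Docker/docker-compose.vs.debug.g.yml \
  -p dockercompose9846867733375961963 \
  up -d --build
```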

So let´s have a closer look at those two yml files and see the differences.

docker-compose.vs.release.g.yml
version: '3'
 
services:
  wacore.talenthunter.api:
    build:
      args:
        source: obj/Docker/publish/
    volumes:
      - C:\Users\hb\onecoremsvsmon\15.0.26919.1:C:\remote_debugger:ro
    entrypoint: C:\\remote_debugger\\x64\\msvsmon.exe /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn /nowowwarn /timeout:2147483646
    labels:
      .....

When you build your application in Visual Studio in release mode, then two main things happen:

First of all your application is built and the build output is copied to the output folder “obj/Docker/publish/“.

Then your container is built using the docker-compose up command with the above configuration file docker-compose.vs.release.g.yml. In the configuration file you can see the build argument source, which points to the same folder where the build output of our application is (“obj/Docker/publish/”). This argument is used in the Dockerfile to copy all the content of that folder into the Docker container (check again the code in the Dockerfile we looked at a moment ago).

On top of that a volume mount is created to the debugging tool msvsmon, but I´ll dive a bit deeper into that in a second.

docker-compose.vs.debug.g.yml

When you build your application in Visual Studio in debug mode, then the docker-compose.vs.debug.g.yml file is created by Visual Studio and used as an input to build the container. That configuration file looks as follows:

version: '3'
 
services:
  wacore.talenthunter.api:
    image: herbertbodner/wacore.talenthunter.api:dev
    build:
      args:
        source: obj/Docker/empty/
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=1
      - NUGET_PACKAGES=C:\.nuget\packages
      - NUGET_FALLBACK_PACKAGES=c:\.nuget\fallbackpackages
    volumes:
      - D:\_repo\TalentHunter-Api\src\WaCore.TalentHunter.Api:C:\app
      - C:\Users\hb\onecoremsvsmon\15.0.26919.1:C:\remote_debugger:ro
      - C:\Users\hb\.nuget\packages\:C:\.nuget\packages:ro
      - C:\Program Files\dotnet\sdk\NuGetFallbackFolder:c:\.nuget\fallbackpackages:ro
 
    entrypoint: C:\\remote_debugger\\x64\\msvsmon.exe /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn /nowowwarn /timeout:2147483646
    labels:
      ....

While in release mode the application build output is copied to the docker image, in debug mode the build output is not copied to the image. As you can see in the configuration file above, the source argument points to an empty directory (“obj/Docker/empty/”) and therefore nothing is copied by the docker-compose up command to the docker image.

So what happens instead?

Well, instead of copying the files, a volume mount is created to the application project folder. Therefore the docker container gets direct access to the project folder on your local disk.

In addition to the project folder, three other volume mounts are created: two mounts give the Docker container access to the NuGet packages, which might be necessary to run the application, and another mount points to the folder containing the debugging tools.

Let´s have a closer look at how debugging works with docker containers.

Debugging the application running in Docker with Visual Studio

As already mentioned, a volume mount is created from the image to the folder “C:\Users\hb\onecoremsvsmon\15.0.26919.1\”. That folder contains the debugging tools, which come with Visual Studio.

When the container is started, then the msvsmon.exe file is executed on the container as well, because msvsmon.exe is defined as an entrypoint in the docker-compose.vs.debug.g.yml file (see above).

msvsmon.exe then communicates with Visual Studio, so we are able to set breakpoints and debug the code as we wish.

If you want to know more about how Visual Studio 2017 and Docker work together, then I recommend the following Pluralsight course: “Introduction to Docker on Windows with Visual Studio 2017” by Marcel de Vries.

I love Pluralsight, and if you are looking for a source with condensed information prepared in a structured way for you to consume, then I urge you to look into https://www.pluralsight.com

CloseUp

In this post we saw how easy it is to add docker support to a .Net Core project using Visual Studio 2017.

Then we looked into all the different files, which are created automatically to figure out what they are used for.

Finally I explained what is happening in the background when you build a docker project with Visual Studio 2017 in release mode as well as in debug mode.

Now, I hope this getting-started-guide helps you to set up your own projects using docker. Let me know if anything is unclear or if you have any questions.

That´s it for today. See you around and HabbediEhre!

Understanding Docker with Visual Studio 2017 – Part 1

Reading Time: 5 minutes

Containerization of applications using Docker with Visual Studio 2017 is trendy, but it is not so easy to understand what is happening in the background.

Therefore, in this blog post I´m going to explain why using containers is beneficial and what a container or image is. Then we talk about how to set up Docker with Visual Studio on Windows 10.

In part 2 of this blog post series I´m going to dive deeper in an example project and explain the created Docker files and executed Docker commands, which are simplified with the Visual Studio Docker integration.

You can find part 2 here.

Ok, let´s start with the purpose of using containers.

Why using containers?

Although it is relatively complex to set up your whole infrastructure to use containers, it gives you a lot of benefits in the long run.

However, the usage of containers is not always the best approach for every project nor company.

Using containers is usually beneficial when you use Scrum with frequent deployments and when you are using a Microservice architecture.

Let´s have a closer look at those two areas – Scrum with regular deployments and the Microservice architecture.

Scrum with regular deployments

The value of Scrum is to deliver a “Done”, useable, and potentially releasable product Increment each sprint, so you can get feedback as soon as possible.

Therefore, it is necessary to ship your new features to your customers as soon as possible. This means that you want to deploy as often as possible.

As the deployment is done quite often, it should be as simple and effortless as possible. You don´t want your team to spend a lot of time to deploy new features. Therefore, it is wise to automate the deployment.

Then you can focus your main effort on building new features instead of deploying them.

Containers can help you to simplify the deployment of (or rollback to) different versions of your application to staging environments as well as the production environment.

This is especially true if your application consists of a lot of small services, which happens a lot these days, as a new architectural style in software development is widely adopted – the Microservice architecture.

Microservice architecture

Martin Fowler, a well-known figure in the software architecture world, explains in his article that the Microservice architecture approach has become very popular during the last couple of years.

The main concept of the Microservice architecture is to split up your application in lots of small services, which talk to each other, instead of having one big monolithic application.

While both architectures (microservice as well as monolithic) have their benefits and downsides, one important distinction is that you have to deal with a lot of services when using a Microservice architecture.

Therefore, a lot of tools have been created in recent years to manage and deploy this vast amount of services in a coordinated and automated fashion.

One very popular tool for this is Docker.

What are containers?

But what are containers exactly? What is an image? What´s the difference to a virtual machine? And what´s docker?

Let´s have a look at these terms and make them clearer, before we dive into how to install Docker on Windows.

Virtual machine vs Container vs Docker

Virtualization of computers has been widely adopted in the past two decades, because it has some big advantages.

Compared to dedicated servers, Virtual Machines (VMs) are easy to move around from one physical server to another. It is easy to create backups of the whole virtual machine, or to restore it to a previous state (for instance, to the state before an update was applied that broke the application).

Containers are a bit similar to VMs, because we can also move containers around easily and create backups of different versions.

However, the main difference is that containers are much more lightweight compared to VMs.

While each VM has its own operating system installed, containers can share the same operating system.

The two main advantages of containers over VMs are that containers start up much faster (in milliseconds) and use less resources resulting in better performance.

To say it in one sentence: Containers are lightweight and portable encapsulations of an environment in which to run applications.

Ok, now we know about VMs and containers. How does Docker fit in here?

Docker is a company providing container technology, which is called Docker containers.

While the idea of containerization has been around for quite some time and has also been implemented on Linux (as LXC), Docker containers introduced several significant changes to LXC that make containers more flexible to use.

However, when people talk about containers these days, then they usually mean Docker containers.

Image vs Container

There is a difference between an image and a container. The two are closely related, but distinct.

An image is an immutable file that is essentially a snapshot of a container. It is built up from a series of read-only layers.

To use a cooking metaphor: if an image is a recipe, a container is the cake. Or, using a programming metaphor: if an image is a class, a container is an object – an instance of the class.

You can create multiple instances of an image, and therefore have multiple containers based on the same image.
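The class/instance analogy can be made concrete with a few commands (a dry-run sketch that only prints the commands; nginx is just an arbitrary, well-known example image):

```shell
# Dry-run helper: prints each command instead of executing it.
run() { echo "+ $*"; }

run docker run -d --name web-1 nginx    # first container (instance) of the image
run docker run -d --name web-2 nginx    # second container from the same image
run docker ps --filter ancestor=nginx   # list all containers based on that image
```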

Windows Containers vs Linux Containers

You can run Docker containers on Linux as well as on Windows.

However, native Docker support is only provided on the newer versions of Windows: it is supported on Windows 10 and Windows Server 2016 (and on Azure, of course).

As we already mentioned above, containers share the operating system kernel. Therefore, we have to distinguish between Linux containers and Windows containers, because they target different kernels (Windows kernel vs Linux kernel).

However, on Windows 10 for example, we can run Windows containers as well as Linux containers.

Why and how that works, I´m going to explain in a minute.

But at first, let´s set up Docker on Windows 10.

Setting up Docker on Windows 10

This is basically very simple and just consists of two steps, which you can read up on the Docker documentation pages. But what is missing there is some additional information on what is happening in the background.

That´s why I´m trying to give more insights on that here.

CPU Virtualization must be enabled

Before you install anything else, make sure that CPU virtualization is enabled in your BIOS. At first I didn´t check this on my laptop and got very weird errors. It took me some time to google the error “Unable to write to the database” and figure out that the problem was due to CPU virtualization not being enabled.

So make sure this is configured on your machine before you proceed.

Install Docker for Windows

Next you have to install Docker for Windows.

When you do this a couple of things happen in the background.

First of all, the Hyper-V Windows feature is enabled on your machine. This feature is a hypervisor service, which allows you to host virtual machines.

Then a new virtual machine (VM) is created. This VM runs a MobyLinux operating system and is hosted in Hyper-V. If you open the Hyper-V manager after you installed Docker for Windows, you will see something like this:

Docker HyperVManager

By default the virtual hard disk of this VM is stored on:

C:\Users\Public\Documents\Hyper-V\Virtual hard disks

This MobyLinux VM has the Docker host installed, and every Linux Docker container we start up will run on top of this VM.

Every Windows Docker container however, will run natively on your Windows 10 machine.

Now that you have Docker installed on your local machine, let´s have a look at how Visual Studio is integrated with Docker.

In part 2 I´m going to run through an example application for Docker with Visual Studio 2017 and explain each file, which is created by Visual Studio when adding Docker support.

I´m also going to explain the commands, which are executed by VS, when you start or debug your application within a Docker container.

You can find part 2 here.

Stay tuned and HabbediEhre!

Netherlands biggest interest in Scrum worldwide

How interested are people in Scrum? Is its popularity growing or declining? What about the interest in Agile or DevOps?

A few days ago I stumbled upon a website from Google, https://trends.google.com, which tells you what people search for on the Internet using the Google search engine.

It´s amazing how the Internet works today. Google alone currently receives about 60,000 search requests per second! Yes, per second!

I was actually curious to see what people, who are interested in Agile and Scrum, are looking for.

So I clicked a bit around and found 3 interesting results, which I would like to share with you:

“Scrum” most popular in the Netherlands

It turns out that people from the Netherlands searched for “Scrum” the most compared to any other country in the world. I looked at a time range of the previous 12 months (November 2016 to October 2017).

Google Trends: Scrum

The numbers are calculated on a scale from 0 to 100 and represent search interest relative to the highest point (for the chart in the middle of the picture) or relative to the location with the most popularity for the search term (for the list at the bottom of the picture).

A higher value means a higher proportion of all queries, not a higher absolute query count. This means the numbers are relative to the size of a country, and therefore you can easily compare a very small country with a very big one.

The result is very interesting for me, because I (still) live in the Netherlands and I know that Scrum and Agile is big here.

But I didn´t know that people in the Netherlands are leading the ranking and have the highest interest in Scrum worldwide.

St. Helena and China are (to my surprise) in second and third place in the ranking. I will revisit both of them in a minute.

If you compare the Dutchies to Switzerland and Germany, which are in fourth and fifth place in the ranking, the interest there is just about 50% of that in the Netherlands.

I expected the U.K. and the U.S. somewhere in the top places, but I found the U.K. at rank 20 and the U.S. even at rank 25.

 

Another interesting result:

“Agile software development” most popular in St. Helena

If you are like me, then you might ask yourself: Where the f*ck is St. Helena?

I had to look it up in Google Maps as well; it´s a small island somewhere in the middle of the Atlantic Ocean. It probably sounds familiar to you because this is the island where Napoleon was exiled after he lost the Battle of Waterloo in 1815.

Anyway, around 4,500 people live on the island, and they are obviously crazy about Agile software development.

If you go there for holiday you probably find agile people all over the place. 🙂

Google Trends: Agile

As you can see in the graph, the popularity of “Agile software development” was quite stable throughout the year, but started to increase in August 2017 (and is still increasing today, November 2017).

So, either Agile software development has really been getting more traction over the last 3 months, or Google made changes in one of their algorithms.

This is interesting for me, because the number of page views on my blog increased as well. It actually follows pretty much the same pattern as above – quite stable throughout the year, and then a linear increase within the last 3 months.

So I thought, maybe the graph in google trends is actually caused by my blog?

Well. I wish 🙂

Anyway, here the last interesting result I found:

“DevOps” is very popular in China

Come on!

China?

That´s weird!

Google Trends: DevOps

If you look at the numbers, the Chinese interest in DevOps is way ahead. St. Helena, again in second place, has only 55%, and India is in third place with only 40%.

 

But anyway, I think Google trends is a very interesting and powerful tool. And it is really fast.

It gives you insights in what people are searching for on the Internet using the Google search engine.

And the cool thing is, that Google gives you access to this information for free.

So have a look at it and play around. There are quite some nice features to discover.

Btw: if you were wondering why the graph has its minimum always at the same time in all of the three graphs, then I can tell you that people don´t want to know about Scrum, Agile development or DevOps at the end of the year. 🙂

It seems, like everyone is at home celebrating Christmas and New Year, and spending no time searching the Internet for these topics.

But that happens not only for these three topics: the number of queries in general goes down during those days.

If you want to look into this further yourself, here are the links to the queries I used in Google Trends:

Ok, that´s it for today. Stay tuned and HabbediEhre!