Scrum of Scrums

The Scrum of Scrums is a meeting at the inter-team level whose purpose is to surface dependencies between teams and align their collaboration.

In this article I would like to give you insights into how we implemented Scrum of Scrums at LeaseWeb, using a practical example.

I am not going to tell you about the theory behind it, because you can find that information in other places on the Internet.

Instead, I want to share some practical insights into how Scrum of Scrums can be implemented, what tools we are using and the benefits and downsides we get out of it.

However, before we start, I would like to explain in a few sentences what Scrum of Scrums actually is.

Scrum of Scrums

The Scrum of Scrums has a similar purpose as the daily standup in a Scrum team. The difference is that the Scrum of Scrums is done at an inter-team level. This means that a representative of each Scrum team joins the Scrum of Scrums.

As in the daily standup of a Scrum team, in the Scrum of Scrums each team representative has to answer three questions:

  • What impediments does my team have that will prevent them from accomplishing their Sprint Goal (or impact the upcoming release)?
  • Is my team doing anything that will prevent another team from accomplishing their Sprint Goal (or impact their upcoming release)?
  • Have we discovered any new dependencies between the teams or discovered a way to resolve an existing dependency?

If you want to learn more about the Scrum of Scrums, then I recommend having a look at the Scrum@Scale guide. That's also the place where the three questions come from.

The Scrum@Scale guide has been published recently (February 2018) by one of the fathers of Scrum, Jeff Sutherland.

This is definitely a good starting point to learn more about scaling Scrum.

Ok, let's have a look at how we implemented Scrum of Scrums at LeaseWeb.

Scrum of Scrums implementation

I have been working at LeaseWeb for a couple of years, so let me give you a practical insight into how we implemented the Scrum of Scrums there.

12 Scrum Teams

At LeaseWeb there are 12 Scrum teams. Each team is responsible for a certain service or product and naturally there are often dependencies between the teams.

For instance, when building a new feature, a couple of teams might be involved and depend on each other: to realize the new functionality, several teams have to make changes in their product. This requires close collaboration, and that's why Scrum of Scrums has been implemented.

Big board in the hallway

We have a big board in the hallway, where everyone can see it when walking by. So, if people from other departments are interested in what the development teams are currently working on, they can have a look at this board.

And the board is really big; actually, it is a whole wall. If you want to know the dimensions, I would say it is about three meters high and six meters wide. It is quite an impressive view when you enter the floor of the development department.

The board is basically a big table with columns and rows. There are columns for the current sprint n, the next sprint n+1, sprint n+2 and sprint n+3. In addition, there are also columns for the next months and the next quarter.

While the columns are used to display the time, each team gets a separate row in the table and fills it with cards.
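
To give you an idea, here is a small, completely made-up excerpt of such a board (the real board has 12 rows, one per team, and the team and card names below are invented):

Team   | Sprint n      | Sprint n+1     | Sprint n+2 | Next quarter
Team A | Feature X API | Feature X UI   | Bugfixing  | Promo
Team B | DB migration  | Feature X data |            | Reporting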

The cards

The cards contain high-level information on what each team is doing in the current sprint, as well as what the teams are planning to do in the upcoming sprints.

The level of detail is higher in the current sprint and obviously decreases the further you plan into the future.

So it is totally fine to just have one card in the next quarter column, which has, for instance, “promo” written on it. This should indicate that the team plans to work on a promotion feature in the next quarter. But it is important to understand that this is the high-level plan from today's perspective. It is not written in stone and it might change; in fact it is very likely to change, because it is very difficult to plan ahead for such a long time.

However, the team should have a more detailed plan on what they are working on in the upcoming sprint, especially if they have dependencies with other teams, so they can coordinate and resolve those dependencies as well as possible.

But even though the plans might change in the future, it is good to have the card on the board, because it triggers discussions with other teams and stakeholders on what the most important thing to work on is and where we have inter-team dependencies.

The cards are magnetic and stick on the board. You can write on them with a whiteboard marker and reuse them again after cleaning. We also use different colors for different projects or epics. You can find those magnetic cards on Amazon, and the lines you can use to build the table on the board can be found here.

The Scrum of Scrums meeting

The Scrum of Scrums meeting itself happens once a week and is timeboxed to 15min.

A representative of each team explains what the team is doing in the current sprint and the plans for the upcoming sprint, answering the three questions I mentioned above. As there are 12 teams, it is crucial to keep it short and not get into details.

In general, the focus during the explanations is on the dependencies with other teams. All other details are left out.

In case there are no dependencies with other teams, the most important goals of the team are mentioned. I usually mention the sprint goal of my team there as well. Nonetheless, as I already said, the key is to keep it short.

If there are any questions during the explanation, they are answered right on the spot if the answer is short. In case the answer needs a more detailed explanation, it is postponed until after the Scrum of Scrums, and the people who are interested stay after the end of the meeting to hear and discuss the details.

If the answer triggers a discussion during the Scrum of Scrums, any of the present Scrum Masters is allowed to interrupt the discussion and ask the participants to discuss the topic after the meeting. That's how it is possible to keep the timebox of 15 minutes, even though there are 12 teams.

Benefits and downsides

Let´s start with the downsides of having a Scrum of Scrums meeting.

Well, everyone is busy with their own work and has a tight schedule, and here comes another meeting that you have to attend. And even though it is just 15 minutes, you need some time to prepare the board. In addition, you have to interrupt your normal work and therefore also lose some focus time, because of the context switch before and after the meeting.

But next to that, there are also quite some benefits of having the Scrum of Scrums in place.

First of all, this formal process forces you to think about and explain what you are working on and what you are planning to do.

Just by doing that, you might encounter impediments with other teams earlier, so you are able to work on resolving them before they become blockers within your team. This is going to save your team a lot of hassle, nerves and time.

Another great benefit is that you hear what other teams are working on.

And you might get better insights into why they are not able to work on the tasks your own team depends on. This gives you a better understanding of the overall direction the whole department is going in and the goals the teams are working towards.

The Scrum of Scrums also fosters collaboration between teams.

I recall multiple occasions where, after the meeting, I went to other teams to ask for their help, or somebody came to me with information I didn't know yet.

For instance, my team planned to work on a new backup solution, which I explained briefly during the meeting. Afterwards another Scrum Master poked me and told me that his team already had something similar in place.

In the end, the two teams worked together on a solution, and with the knowledge provided by the other team it was way easier for my team to implement a good one. Without the Scrum of Scrums we would probably have worked on it by ourselves and presented the solution after the sprint. Only then would we have figured out that the knowledge and infrastructure already existed in the department and we had just done the work twice.

Closeup

In this article I gave you an overview on how we implemented Scrum of Scrums at LeaseWeb. I have explained what tools we are using and what benefits and downsides we get out of it.

Do you have Scrum of Scrums in place in your company as well? What are the differences compared to the implementation at LeaseWeb?

Or do you not have Scrum of Scrums in place in your company yet, but plan to implement it? Let me know how it works out!

Ok, that´s it for today. Stay tuned and HabbediEhre!

Understanding Docker with Visual Studio 2017 – Part 2

In Part 1 of the series “Understanding Docker with Visual Studio 2017” I described what you need to prepare to get docker up and running on your Windows machine.

In this part 2 I´m going to explain how to use Docker with a .Net project using Visual Studio 2017.

I am going to describe each file that is created by Visual Studio for the Docker integration, as well as the commands that are executed by VS when you start your application within a Docker container.

At the end of this blog post you should have a better understanding about what Visual Studio is doing for you in the background and how you can debug a docker application using those tools.

I was working on a sample project called TalentHunter, which I put on GitHub. I'm going to use that project to explain the steps. Feel free to download it from my GitHub page.

Adding Docker support

It is very easy to add docker support to your .Net project.

Just create a new web or console application using Visual Studio 2017 and then right-click on the web or console project -> Add -> Docker Support.

Voila, you have docker support for your project. That´s how easy it is!

By doing this, Visual Studio creates a bunch of files, which I´m going to explain here in more detail:

  • Dockerfile
  • docker-compose.yml
  • docker-compose.override.yml
  • docker-compose.ci.build.yml
  • docker-compose.dcproj
  • .dockerignore

Let´s look at each file and figure out what it contains.

Dockerfile
# base the image on the official ASP.NET Core 2.0 image from Microsoft
FROM microsoft/aspnetcore:2.0
# build argument pointing to the folder that contains the published application
ARG source
# set the working directory inside the container
WORKDIR /app
# expose port 80 to the outside world
EXPOSE 80
# copy the published application (or the fallback folder) into /app
COPY ${source:-obj/Docker/publish} .
# command that is executed when the container starts
ENTRYPOINT ["dotnet", "WaCore.TalentHunter.Api.dll"]

The Dockerfile contains information that is used when BUILDING a new Docker image. It is NOT used when starting the container; this can be confusing, but it is important to understand the difference.

In the specific case above, the image is derived from the official Microsoft image for ASP.NET Core 2.0. It accepts a build argument with the name “source”, which is passed in when the image is built (for instance by docker-compose).

It defines “/app” as working directory and exposes port 80 to the outside world.

When building the image it copies the content from the path specified in the source argument to the current directory within the container. If there is no source argument specified, the contents from the path obj/Docker/publish are used.

When the container is started, it will execute the command dotnet WaCore.TalentHunter.Api.dll to start the web application.
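
By the way, you don't need Visual Studio to use this Dockerfile. As a sketch (the image tag is just an example, and the commands assume you run them from the project folder that contains the Dockerfile, after publishing the application), you could build and run the image manually:

# build the image; the source build argument points to the published application
docker build --build-arg source=obj/Docker/publish -t wacore.talenthunter.api .

# start a container in the background and map container port 80 to port 8080 on the host
docker run -d -p 8080:80 wacore.talenthunter.api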

docker-compose.yml

An application usually consists of multiple containers (e.g. frontend, backend, database, etc.), but to keep it simple I have only one container in this example.

The docker-compose.yml looks as follows:

version: '3'
 
services:
  wacore.talenthunter.api:
    image: herbertbodner/wacore.talenthunter.api
    build:
      context: ./WaCore.TalentHunter.Api
      dockerfile: Dockerfile

The concept of a docker-compose file is to define the set of services, and thereby the images, that are required to run an application. As already mentioned, this simplified example has only one image, which is named herbertbodner/wacore.talenthunter.api.

The last two lines in the file define where the dockerfile can be found, which is within the subdirectory WaCore.TalentHunter.Api and is named Dockerfile.

docker-compose.override.yml
version: '3'
 
services:
  wacore.talenthunter.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
networks:
  default:
    external:
      name: nat

This file completes the configuration information together with the docker-compose.yml file. It sets the environment variable ASPNETCORE_ENVIRONMENT to Development and publishes port 80 of the container (Docker maps it to a random port on the host).

Next to that it also defines some basic networking configuration.

This docker-compose.override.yml file is used together with the docker-compose.yml (as we will see in a moment below).
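
You can observe this merging behavior yourself by running docker-compose manually from the folder that contains both files: each -f parameter adds one more configuration file, and settings in later files override or extend the earlier ones.

# merge both files and start the container in detached mode
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d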

docker-compose.ci.build.yml

If you have ever set up a traditional continuous integration/delivery pipeline, you probably know that you usually have to install a couple of tools on your build server to enable a successful build of your application (e.g. build engine, 3rd-party libraries, git client, sonar client, etc.).

The idea of the docker-compose.ci.build.yml file is to build an image that has all the dependencies installed which are required to build your application. This means that we can also create a Docker image for our build server.

However, we are not going to make use of this file in the remainder of this blog post, therefore it is out of scope. Just keep in mind that you can also dockerize your build server like this.
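
Just to give you an idea of what such a file looks like, here is a minimal sketch (the image tag and the commands are illustrative, not necessarily exactly what Visual Studio generates):

version: '3'

services:
  ci-build:
    # image that contains the .NET Core SDK and all build dependencies
    image: microsoft/aspnetcore-build:2.0
    volumes:
      # mount the source code from the host into the container
      - .:/src
    working_dir: /src
    # restore the NuGet packages and publish the application
    command: /bin/bash -c "dotnet restore && dotnet publish -c Release -o ./obj/Docker/publish"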

docker-compose.dcproj

This file is the project file for the Docker integration and contains relevant information like the version and where to find the configuration files.

.dockerignore
# ignore everything by default
*
# except the published application output and the (empty) fallback folder
!obj/Docker/publish/*
!obj/Docker/empty/
!bin/Release/netcoreapp2.0/publish/*

The .dockerignore file contains a list of file patterns that should be ignored by Docker.

As you can see in the file, everything is ignored (indicated by the *), except the three folders indicated in the last three lines.

Running the application with Docker

You can easily build and run the application by setting the docker-compose project as the startup project and hitting F5, or by right-clicking the docker-compose project and selecting “Build” in the dropdown menu.

Docker - Build Image

When you run the docker-compose project you will see in the output window (build section) the commands, which are executed by Visual Studio.

Some of those commands are important in general, but not so important for understanding what is going on. For instance, there are commands that check whether the container is already running and, if so, kill it.

“docker-compose up” command

Nevertheless, the important command executed by Visual Studio when hitting F5 is the docker-compose up command with the following parameters:

docker-compose -f "D:\_repo\TalentHunter-Api\src\docker-compose.yml" -f "D:\_repo\TalentHunter-Api\src\docker-compose.override.yml" -f "D:\_repo\TalentHunter-Api\src\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose9846867733375961963 up -d --build

The docker-compose up command builds an image and starts the container.

Let's have a closer look at the command and its parameters:

The -d and --build parameters at the end indicate that the container should be started in detached mode and that a build should be forced. The -p parameter gives the project a certain name.

Then there are three configuration files specified with the -f parameter: docker-compose.yml, docker-compose.override.yml and docker-compose.vs.debug.g.yml.

All of them together contain the information on how to build the Docker image. We already looked at the first two files a moment ago. The last configuration file (docker-compose.vs.debug.g.yml) is generated by Visual Studio itself and differs depending on the build mode (debug or release).

In release mode a file named docker-compose.vs.release.g.yml is created, while in debug mode the file docker-compose.vs.debug.g.yml is used.

So let´s have a closer look at those two yml files and see the differences.

docker-compose.vs.release.g.yml
version: '3'
 
services:
  wacore.talenthunter.api:
    build:
      args:
        source: obj/Docker/publish/
    volumes:
      - C:\Users\hb\onecoremsvsmon\15.0.26919.1:C:\remote_debugger:ro
    entrypoint: C:\\remote_debugger\\x64\\msvsmon.exe /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn /nowowwarn /timeout:2147483646
    labels:
      .....

When you build your application in Visual Studio in release mode, then two main things happen:

First of all your application is built and the build output is copied to the output folder “obj/Docker/publish/“.

Then your container is built using the docker-compose up command with the above configuration file docker-compose.vs.release.g.yml. In the configuration file you can see the build argument source, which points to the same folder where the build output of our application is (“obj/Docker/publish/”). This argument is used in the Dockerfile to copy all the content from that folder to the Docker container (check again the code in the Dockerfile, which we had a look at a moment ago).

On top of that a volume mount is created to the debugging tool msvsmon, but I´ll dive a bit deeper into that in a second.

docker-compose.vs.debug.g.yml

When you build your application in Visual Studio in debug mode, then the docker-compose.vs.debug.g.yml file is created by Visual Studio and used as an input to build the container. That configuration file looks as follows:

version: '3'
 
services:
  wacore.talenthunter.api:
    image: herbertbodner/wacore.talenthunter.api:dev
    build:
      args:
        source: obj/Docker/empty/
    environment:
      - DOTNET_USE_POLLING_FILE_WATCHER=1
      - NUGET_PACKAGES=C:\.nuget\packages
      - NUGET_FALLBACK_PACKAGES=c:\.nuget\fallbackpackages
    volumes:
      - D:\_repo\TalentHunter-Api\src\WaCore.TalentHunter.Api:C:\app
      - C:\Users\hb\onecoremsvsmon\15.0.26919.1:C:\remote_debugger:ro
      - C:\Users\hb\.nuget\packages\:C:\.nuget\packages:ro
      - C:\Program Files\dotnet\sdk\NuGetFallbackFolder:c:\.nuget\fallbackpackages:ro
 
    entrypoint: C:\\remote_debugger\\x64\\msvsmon.exe /noauth /anyuser /silent /nostatus /noclrwarn /nosecuritywarn /nofirewallwarn /nowowwarn /timeout:2147483646
    labels:
      ....

While in release mode the application build output is copied to the docker image, in debug mode the build output is not copied to the image. As you can see in the configuration file above, the source argument points to an empty directory (“obj/Docker/empty/”) and therefore nothing is copied by the docker-compose up command to the docker image.

So what happens instead?

Well, instead of copying the files, a volume mount is created to the application project folder. Therefore the docker container gets direct access to the project folder on your local disk.

In addition to the project folder, there are three other volume mounts created: two mounts give the Docker container access to the NuGet packages, which might be necessary to run the application, and another mount points to the folder that contains the debugging tools.
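
If you are curious, you can verify those volume mounts on the running container with the docker CLI (replace the container id placeholder with the one that docker ps shows you):

# list the running containers to find the container id
docker ps

# print the volume mounts of the container as JSON
docker inspect --format "{{ json .Mounts }}" <container-id>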

Let´s have a closer look at how debugging works with docker containers.

Debugging the application running in Docker with Visual Studio

As already mentioned, a volume mount is created from the image to the folder “C:\Users\hb\onecoremsvsmon\15.0.26919.1\”. That folder contains the debugging tools, which come with Visual Studio.

When the container is started, the msvsmon.exe file is executed in the container as well, because msvsmon.exe is defined as the entrypoint in the docker-compose.vs.debug.g.yml file (see above).

msvsmon.exe then interacts with Visual Studio, and therefore we are able to set breakpoints and debug the code as we wish.

If you want to know more about how Visual Studio 2017 and Docker work together, then I recommend the following Pluralsight course: “Introduction to Docker on Windows with Visual Studio 2017” by Marcel de Vries.

I love Pluralsight, and if you are looking for a source with condensed information prepared in a structured way for you to consume, then I urge you to look into https://www.pluralsight.com.

CloseUp

In this post we saw how easy it is to add docker support to a .Net Core project using Visual Studio 2017.

Then we looked into all the different files, which are created automatically to figure out what they are used for.

Finally I explained what is happening in the background when you build a docker project with Visual Studio 2017 in release mode as well as in debug mode.

Now, I hope this getting-started-guide helps you to set up your own projects using docker. Let me know if anything is unclear or if you have any questions.

That´s it for today. See you around and HabbediEhre!

Understanding Docker with Visual Studio 2017 – Part 1

Containerization of applications using Docker with Visual Studio 2017 is trendy, but it is not so easy to understand what is happening in the background.

Therefore, in this blog post I´m going to explain why using containers is beneficial and what a container or image is. Then we talk about how to set up Docker with Visual Studio on Windows 10.

In part 2 of this blog post series I'm going to dive deeper into an example project and explain the created Docker files and executed Docker commands, which are simplified by the Visual Studio Docker integration.

You can find part 2 here.

Ok, let´s start with the purpose of using containers.

Why use containers?

Although it is relatively complex to set up your whole infrastructure to use containers, it gives you a lot of benefits in the long run.

However, using containers is not always the best approach for every project or company.

Using containers is usually beneficial when you use Scrum with frequent deployments and when you have a Microservice architecture.

Let's have a closer look at those two areas: Scrum with regular deployments and the Microservice architecture.

Scrum with regular deployments

The value of Scrum is to deliver a “Done”, usable, and potentially releasable product Increment each sprint, so you can get feedback as soon as possible.

Therefore, it is necessary to ship your new features to your customers as soon as possible. This means that you want to deploy as often as possible.

As the deployment is done quite often, it should be as simple and effortless as possible. You don't want your team to spend a lot of time deploying new features. Therefore, it is wise to automate the deployment.

Then you can focus your main effort on building new features instead of deploying them.

Containers can help you to simplify the deployment of (or rollback to) different versions of your application to staging environments as well as the production environment.

This is especially true if your application consists of a lot of small services, which happens a lot these days, as a new architectural style in software development is being widely adopted: the Microservice architecture.

Microservice architecture

Martin Fowler, a very well-known figure in the software architecture world, explains in his article that the Microservice architecture approach has become very popular during the last couple of years.

The main concept of the Microservice architecture is to split up your application in lots of small services, which talk to each other, instead of having one big monolithic application.

While both architectures (microservice as well as monolithic) have their benefits and downsides, one important distinction is that you have to deal with a lot of services when using a Microservice architecture.

Therefore, a lot of tools have been created in recent years to manage and deploy that vast amount of services in a coordinated and automated fashion.

One very popular tool for this is Docker.

What are containers?

But what are containers exactly? What is an image? What's the difference compared to a virtual machine? And what's Docker?

Let's have a look at these terms and make them clearer, before we dive deeper into how to install Docker on Windows.

Virtual machine vs Container vs Docker

Virtualization of computers has been widely adopted in the past two decades, because it has some big advantages.

Compared to dedicated servers, Virtual Machines (VMs) are easy to move around from one physical server to another. It is easy to create backups of the whole virtual machine, or to restore it to a previous state (for instance to the state before the update that broke the application was applied).

Containers are a bit similar to VMs, because we can also move containers around easily and create backups of different versions.

However, the main difference is that containers are much more lightweight compared to VMs.

While each VM has its own operating system installed, containers can share the same operating system.

The two main advantages of containers over VMs are that containers start up much faster (in milliseconds) and use less resources resulting in better performance.

To say it in one sentence: Containers are lightweight and portable encapsulations of an environment in which to run applications.

Ok, now we know about VMs and containers. How does Docker fit in here?

Docker is a company providing container technology, which is called Docker containers.

While the idea of containerization has been around for quite some time and has also been implemented on Linux (called LXC), Docker containers introduced several significant changes to LXC that make containers more flexible to use.

However, when people talk about containers these days, then they usually mean Docker containers.

Image vs Container

There is a difference between an image and a container. The two are closely related, but distinct.

An image is an immutable file that is essentially a snapshot of a container. It is built up from a series of read-only layers.

To use a cooking metaphor: if an image is a recipe, a container is the cake. Or using a programming metaphor: if an image is a class, a container is an object or instance of the class.

You can create multiple instances of an image, therefore having multiple containers based on the same image.

Windows Containers vs Linux Containers

You can run Docker containers on Linux as well as on Windows.

However, on Windows, native Docker support is only provided on the newer versions: it is supported on Windows 10 and on Windows Server 2016 (and on Azure, of course).

As we already mentioned above, containers share the operating system kernel. Therefore, we have to distinguish between Linux containers and Windows containers, because they target different kernels (Windows kernel vs Linux kernel).

However, on Windows 10 for example, we can run Windows containers as well as Linux containers.

I'm going to explain why and how that works in a minute.

But first, let's set up Docker on Windows 10.

Setting up Docker on Windows 10

This is basically very simple and just consists of two steps, which you can read up on in the documentation pages of Docker. But what is missing there is some additional information about what is happening in the background.

That´s why I´m trying to give more insights on that here.

CPU Virtualization must be enabled

Before you install anything else, make sure that CPU virtualization is enabled in your BIOS. At first I didn't check this on my laptop and got very weird errors. It took me some time to google the error “Unable to write to the database” and figure out that the problem was that CPU virtualization was not enabled.

So make sure this is configured on your machine before you proceed.

Install Docker for Windows

Next you have to install Docker for Windows.

When you do this a couple of things happen in the background.

First of all, the Hyper-V Windows feature is enabled on your machine. This feature is a hypervisor service, which allows you to host virtual machines.

Then a new virtual machine (VM) is created. This VM runs a MobyLinux operating system and is hosted in Hyper-V. If you open the Hyper-V manager after you installed Docker for Windows, you will see something like this:

Docker HyperVManager

By default the virtual hard disk of this VM is stored on:

C:\Users\Public\Documents\Hyper-V\Virtual hard disks

This MobyLinux VM has the Docker host installed, and every Linux Docker container we start up will run on top of this VM.

Every Windows Docker container, however, will run natively on your Windows 10 machine.
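
At this point you can quickly verify that the installation works by running two commands in PowerShell or a command prompt:

# show version information of the docker client and the docker daemon
docker version

# download a tiny test image and run it; it prints a hello message and exits
docker run hello-world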

Now that you have Docker installed on your local machine, let's have a look at how Visual Studio integrates with Docker.

In part 2 I´m going to run through an example application for Docker with Visual Studio 2017 and explain each file, which is created by Visual Studio when adding Docker support.

I´m also going to explain the commands, which are executed by VS, when you start or debug your application within a Docker container.

You can find part 2 here.

Stay tuned and HabbediEhre!

Netherlands biggest interest in Scrum worldwide

How interested are people in Scrum? Is its popularity growing or declining? What about the interest in Agile or DevOps?

A few days ago I stumbled upon a website from Google, https://trends.google.com, which tells you what people search for on the Internet using the Google search engine.

It's amazing how the Internet works today. Google alone currently receives about 60,000 search requests per second! Yes, per second!

I was actually curious to see what people who are interested in Agile and Scrum are looking for.

So I clicked around a bit and found 3 interesting results, which I would like to share with you:

“Scrum” most popular in the Netherlands

It turns out that people from the Netherlands searched for “Scrum” more than people from any other country in the world. I looked at a time range of the previous 12 months (November 2016 to October 2017).

Google Trends: Scrum

The numbers are calculated on a scale from 0 to 100, and represent search interest relative to the highest point (for the chart in the middle of the picture) or relative to the location with the most popularity for the search term (for the list at the end of the picture).

A higher value means a higher proportion of all queries, not a higher absolute query count. This means the numbers are relative to the size of a country, and therefore you can easily compare a very small country with a very big one.

The result is very interesting for me, because I (still) live in the Netherlands and I know that Scrum and Agile are big here.

But I didn´t know that people in the Netherlands are leading the ranking and have the highest interest in Scrum worldwide.

St. Helena and China are (to my surprise) in second and third place in the ranking. I will revisit both of them in a minute.

If you compare the Dutchies to Switzerland and Germany, which are in 4th and 5th place in the ranking, the interest there is just about 50% of that in the Netherlands.

I expected the U.K. and the U.S. somewhere in the top places, but I found the U.K. at rank 20 and the U.S. even at rank 25.

Another interesting result:

“Agile software development” most popular in St. Helena

If you are like me, then you might ask yourself: Where the f*ck is St. Helena?

I had to look it up in Google Maps as well; it's a small island somewhere in the middle of the Atlantic Ocean. It probably sounds familiar to you as well, because this is the island where Napoleon was exiled after he lost the Battle of Waterloo in 1815.

Anyway, around 4,500 people live on the island, and they are obviously crazy about Agile software development.

If you go there for holiday you probably find agile people all over the place. 🙂

Google Trends: Agile

As you can see in the graph, the popularity of “Agile software development” was quite stable throughout the year, but has been increasing since August 2017 (until today, November 2017).

So either Agile software development has really been getting more traction over the last 3 months, or Google made changes to one of their algorithms.

This is interesting for me, because the number of page views on my blog increased as well. It actually follows pretty much the same pattern as above: quite stable throughout the year, and then a linear increase within the last 3 months.

So I thought, maybe the graph in google trends is actually caused by my blog?

Well. I wish 🙂

Anyway, here the last interesting result I found:

“DevOps” is very popular in China

Come on!

China?

That´s weird!

Google Trends: DevOps

If you look at the numbers, the Chinese interest in DevOps is way ahead. St. Helena, again in second place, has only 55%, and India in third place has only 40%.

But anyway, I think Google trends is a very interesting and powerful tool. And it is really fast.

It gives you insights in what people are searching for on the Internet using the Google search engine.

And the cool thing is that Google gives you access to this information for free.

So have a look at it and play around. There are quite some nice features to discover.

Btw: if you were wondering why all three graphs have their minimum at the same time, I can tell you that people don't want to know about Scrum, Agile development or DevOps at the end of the year. 🙂

It seems like everyone is at home celebrating Christmas and New Year, spending no time searching the Internet for these topics.

But that happens not only for these three topics; the amount of queries in general goes down during those days.

If you want to look into this further by yourself, here are actually the links to the queries I used in google trends:

Ok, that´s it for today. Stay tuned and HabbediEhre!

The power of habit – executing tasks automatically

A habit is something that you do often and regularly, sometimes without even knowing that you are doing it. The great thing about habits is that you can train yourself to execute certain tasks regularly without even thinking.

Every one of us has millions of habits, good ones and bad ones. And they are triggered by certain events in life.

For instance, did you brush your teeth this morning? You might not even remember that you did it, because it is a habit. You don't even have to think about it; it is just part of your morning routine and triggered automatically when you wake up.

You need almost no will power to execute a habit

The great thing about a habit is that you need almost no will power to execute it. That's why habits are so powerful.

If you can turn a task into a habit, one which you know will help you in the long term, then you need almost no will power to execute it consistently.

For example, if you want to learn to play the piano and you make practicing it a habit, then you don't need any will power to get yourself in front of the piano to practice.

Or if you want to eat healthier and you make it a habit, then you don't have to spend your limited will power every day on overcoming the temptation to eat chocolate. It will happen automatically, without even thinking.

But how do you create habits then?

Creating habits is not so easy, but if you do it regularly, it gets easier over time.

I was recently listening to the audio book “Superhuman By Habit” by Tynan. I highly recommend this book to everyone who wants to learn more about habits.

Tynan explains in his book that there are two phases to building habits: the loading phase and the maintenance phase.

The loading phase

In the loading phase you want to teach yourself a new habit. Therefore you have to execute the task every day in the same routine.

For instance, after you come home from work you play the piano for 5 minutes every day.

This requires a lot of discipline and you have to constantly remind yourself to do that. For example, you can set an alarm on your phone or put a sticky note on the piano so you won’t miss it when you come home from work.

In the loading phase it is very important to always execute the task, even when your motivation is down.

If you do miss it once, then plan your next day in a way that ensures you will execute the task. Because if you miss it more than once, you basically have to start over again with the loading phase.

The maintenance phase

After the loading phase your habit is part of your daily routine and you can switch to the maintenance phase.

You have built your habit to the extent that it requires almost no will power anymore to execute the task. It is automated and you don't even have to think about it anymore.

When to switch to the maintenance phase?

In general you cannot predict how long it will take to build a habit. It depends on the person and also on what exactly you want to turn into a habit.

To find out whether you have successfully built your habit or not, you have to ask yourself the following question: “If I stop the loading phase today, would something change?”

If you answer the question with “No”, then you have built your habit. If you expect yourself to fall back to the old routine within a few days or weeks, then keep on going with the loading phase.

More flexibility in maintenance phase

According to Tynan you can allow yourself to be a bit more flexible when you have successfully built your habit.

For instance, you can train yourself to eat no junk food at all. During the loading phase you are, of course, not allowed to go to McDonald's for a burger. Such a loading phase might take up to several years to build the habit.

Afterwards, in the maintenance phase, you can still stick to your plan but be a bit more flexible. You can allow yourself to break the rule every now and then, as long as it remains the exception. Let's say you agree with yourself that you are allowed to eat at McDonald's, but only on Sundays and with your friends.

Drifting off

Over time, though, you are going to drift off from your schedule. So you have to pay attention over the coming months and years and take corrective action when you realize that you have drifted too much.

Benefits of good habits

As I already mentioned, the awesome thing about habits is that they don't need any willpower to be executed. You execute them automatically, without putting a lot of thought into it.

It is of course quite some effort to build good habits. But even if the loading phase takes several years, you will most likely have the habit for the rest of your life. So investing a few years is still only a little effort compared to what you gain over the decades afterwards.

Until now I was only talking about good habits. But you can also have bad habits: you can do things which are bad for you without even really noticing it.

Breaking bad habits is a chapter of its own. I am going to talk about it in one of the upcoming posts.

For now, let me know in a comment: for which daily tasks do you want to build habits? Is there anything that would require only a short amount of time per day, but that you are still not able to do consistently?

Ok, that’s it. I wish you a great day. Stay tuned and HabbediEhre!

Will Power – Discipline can be trained

Will power, or you could also call it discipline, is the ability to control yourself. The interesting thing about will power is that it behaves like a muscle: it needs time to recover after you strain it, but you can train it to gain more strength.

On the one hand, you need will power to overcome inner temptation and stop yourself from doing things which are bad for you but which you are so used to.

And on the other hand, you need will power to force yourself to do things which you know would be good for you.

For instance, you need will power to overcome the temptation to eat sweets when you are on a diet. And you also need it to push yourself off the couch and to the gym to get in shape.

It seems that people who achieve a lot in life have very high will power, otherwise they would not be able to achieve those things. For instance, athletes train every day, while for “normal” people it is hard to even go to the gym once or twice a week.

Will power is limited

Scientists have found in multiple experiments that human will power is limited. Similar to muscle power, you have only a limited amount of will power per day.

For example, you can lift weights only a certain number of times until your muscles get tired. Then your muscles need some rest to recover before you can use them again.

Similarly, you can stretch your will power only to a certain degree; after that you need some time to recover and regain your strength.

I was recently reading the book “The power of habit” by Charles Duhigg. I highly recommend the book, if you want to know more about habits from a scientific point of view.

Anyway, in the book he mentions an experiment that was conducted in the 90s.

The experiment

76 undergraduates participated in the experiment. They were told that the goal of the experiment was to test taste perception. But this was not true; the real goal was to test will power.

Cookie eaters vs radish eaters

Each of the participants was put in a room with a plate of food. Half of the plate was filled with warm, fresh cookies and the other half with radishes, the bitter vegetable.

Then the scientist told half of the participants that they were only allowed to eat the cookies, and the other half that they were only allowed to eat the radishes. Then the scientist left the room and the participants were on their own.

Of course, the cookie eaters were in heaven and enjoyed the fresh, sweet cookies.

The participants assigned to the radishes were craving the cookies, but they had to stick to the bitter vegetables. Ignoring radishes is easy, but resisting warm cookies requires a lot of discipline.

The unsolvable puzzle

After a few minutes the scientist came back to the room, removed the plate and told the participants that they needed to wait for a few minutes. Each participant got a puzzle to solve while they were waiting.

The scientist told the participants that they could ring the bell if they needed something. Then the scientist left the room.

Actually, the puzzle was the most important part of the experiment: it was not solvable. Every attempt to solve it fails, and it requires a lot of discipline to try again and again and again.

The cookie eaters were very relaxed. They tried to solve the puzzle over and over again; they still had a lot of will power left. Some of the cookie eaters spent more than half an hour before they rang the bell.

The radish eaters, on the other hand, were very frustrated and rang the bell much earlier. Some of them were angry and told the scientist that this was a stupid experiment.

In the end, the cookie eaters rang the bell after 19 minutes on average, while the radish eaters rang it after only 8 minutes. The cookie eaters spent more than twice as long trying to solve the unsolvable puzzle.

This means the radish eaters had already consumed a lot of their will power while resisting the cookies.

Will power can be trained

A number of experiments have shown that will power can be trained. If you want to know more, then read the book I already mentioned above: “The power of habit” by Charles Duhigg.

You can train your will power just like your muscles: training every day makes your muscles stronger, and you are able to lift more weight.

It is the same with will power: training increases its strength, and the maximum of your daily will power goes up.

Don’t rely on will power

I thought that successful people have a lot of will power, or discipline. Otherwise how would they be able to achieve all those things?

To my surprise, it is not will power that drives successful people. Will power is important, but the key is to build habits.

Building a habit requires discipline, but when you have built one, then executing it requires almost no discipline at all.

For example, when you were a kid, your parents probably always had to remind you to brush your teeth before you went to bed. It required a lot of discipline from your parents to push you and build this habit for you.

Nowadays you probably don't even think about brushing your teeth before you go to bed. It just happens automatically and doesn't require a lot of discipline. Once the habit has been built, executing it is easy.

Anyway, the whole topic about habits is a very big and interesting one. Therefore I will dedicate one of the upcoming blog posts to that topic.

Ok, that's it for this week.

The next time you come home from work with the plan to go running, but you don’t have any motivation at all, then remember that will power and discipline can be trained like a muscle 🙂

Stay tuned and have a great week. HabbediEhre!

Nexus – the scaling Scrum framework

Nexus is a framework that builds on top of the Scrum framework and is designed for scaling. It focuses on solving cross-team dependencies and integration issues.

What is Nexus?

The Nexus framework has been created by Ken Schwaber, co-creator of the Scrum framework.

Similar to the scrum guide, there is also the Nexus guide, which contains the body of knowledge for the framework.

It was released by scrum.org in August 2015.

You can find the definition of Nexus in the Nexus guide as follows:

Nexus is a framework consisting of roles, events, artifacts, and techniques that bind and weave together the work of approximately three to nine Scrum Teams working on a single Product Backlog to build an Integrated Increment that meets a goal.

The Nexus framework is a foundation to plan, launch, scale and manage large product and software development initiatives.

It is for organizations to use when multiple Scrum Teams are working on one product, as it allows the teams to unify into one larger unit, a Nexus.

Scrum vs Nexus

Nexus is an exoskeleton that rests on top of multiple Scrum Teams when they are combined to create an Integrated Increment.

Nexus is consistent with Scrum and its parts will be familiar to those who have worked on Scrum projects.

The difference is that more attention is paid to dependencies and interoperation between Scrum Teams.

It delivers one “Done” Integrated Increment at least every Sprint.

New Role “Nexus integration team”

The guide defines a new role, the nexus integration team.

It is a Scrum team that takes ownership of any integration issues.

The Nexus integration team is accountable for an integrated increment that is produced at least every Sprint.

If necessary, members of the nexus integration team may also work on other Scrum Teams in that Nexus, but priority must be given to the work for the Nexus integration team.

Event “Refinement”

In Nexus the refinement meeting is formalized as a separate scrum event.

In the cross-team refinement event Product Backlog items are decomposed into enough detail in order to understand which teams might deliver them.

After that dependencies are identified and visualized across teams and Sprints.

The Scrum teams use this information to order their work to minimize cross-team dependencies.

Event “Nexus Sprint Planning”

The purpose of nexus Sprint Planning is to coordinate the activities of all Scrum Teams in a Nexus for a single Sprint.

Appropriate representatives from each Scrum team participate and make adjustments to the ordering of the work as created during Refinement events.

Then a Nexus Sprint Goal is defined, an objective that all Scrum Teams in the Nexus work on to achieve during the Sprint.

After that the representatives join their individual Scrum teams to do their individual team Sprint Planning.

Event “Nexus Daily Scrum”

The Nexus Daily Scrum is an event for appropriate representatives from individual Scrum Teams to inspect the current state of the Integrated Increment.

During the Nexus Daily Scrum the Nexus Sprint Backlog should be used to visualize and manage current dependencies.

The work identified during that event is then taken back to individual Teams for planning inside their Daily Scrum events.

Wrap up

This is a very high-level overview of Nexus; you can find more information in the Nexus guide or in this nice introduction.

I have been practicing Scrum for quite a while now, but I had never heard about Nexus before; only last week I stumbled upon it on the Internet.

Therefore I also don’t know anybody who is actively using the Nexus framework in their daily work.

But from this point of view it looks like Ken Schwaber did a very good job again when defining this framework.

I hope that I will have the chance some time to work with Nexus in real life. Of course, for that you need to have the environment where it would make sense to give the Nexus framework a try.

Ok, that’s it for this week. Have a good day and HabbediEhre!

Scrum values – new section in scrum guide

The scrum guide, the official description of the scrum framework, has recently been extended with a new section: the scrum values.

Before we dive deeper into what the scrum values are about and what they mean, I want to give you a quick overview of the history of scrum, and especially of the history of the scrum guide.

Then it should be easier to understand the big picture, when and why the scrum guide has been created and updated over the years.

History of scrum

Scrum is now about 21 years old.

Jeff Sutherland and Ken Schwaber, the fathers of scrum, presented their paper “SCRUM Software Development Process” the first time in 1995 at the Oopsla conference in Austin, Texas.

But the name scrum was not their invention. They inherited it from the paper The New New Product Development Game, published by Takeuchi and Nonaka in 1986.

Scrum is used to solve complex problems. It uses an empirical approach and solves problems based on experience and facts.

To date, scrum has been adopted by a vast number of software development companies around the world. But scrum has also been successfully applied in other domains, for instance manufacturing, marketing, operations and education.

History of scrum guide

Jeff and Ken published the first version of the scrum guide in 2010, which is 15 (!) years after their first presentation of scrum at the conference. I don't know why it took them so long, nor what the community used before that.

Jeff and Ken made some incremental updates to the scrum guide in 2011 and 2013. Together they established the globally recognized body of knowledge of scrum, as we know it today.

Recently, in July 2016, a new section has been added to the scrum guide: the scrum values.

The scrum values

Successful use of Scrum depends on people becoming more proficient in living the following five values: commitment, courage, focus, openness and respect.

Let's have a look at each of those values, including the one-sentence description from the scrum guide.

Commitment

People personally commit to achieving the goals of the Scrum Team.

I am personally very happy that this value has been added to the scrum guide and is now officially recognized as an important value for a scrum team.

It is a challenge to get the team to commit to the goal of a sprint, but it is a crucial part of making the team successful.

Courage

The Scrum Team members have courage to do the right thing and work on tough problems.

People should stand up for their beliefs and for what they think is important to take the team to the next level.

People in a scrum team should not be afraid of tough problems. They face even the toughest problems and try to solve them together.

Focus

Everyone focuses on the work of the Sprint and the goals of the Scrum Team.

People do not pick up work outside of the committed sprint. They focus on the work that was agreed upon in the sprint planning.

Everybody works on what is good for the team, not what is good for themselves. The goals of the team have a higher priority than the personal goals.

Openness

The Scrum Team and its stakeholders agree to be open about all the work and the challenges with performing the work.

Nobody hides facts or tells less than the whole truth about what is going on, and no political games are played.

Everybody is honest about the problems they face and, for instance, explains why things are delayed to give the stakeholders insight into what is happening within the team.

Openness in the team as well as openness to the stakeholders builds trust and a better working environment.

Respect

Scrum Team members respect each other to be capable, independent people.

Respect is one of the key elements of building a good culture within the team and in the whole organization.

Treat people as you want to be treated!

Why has the scrum guide been extended?

How were those changes in the scrum guide implemented?

Ken and Jeff built a community around the scrum guide, and they run an online platform, called User Voice, where people can make suggestions for changes to the scrum guide.

Other people can vote for those suggestions, but the final decision for making changes in the scrum guide is still taken by Ken and Jeff.

Wrap up

Today scrum is recognized as the most applied framework for agile software development.

The scrum guide plays a very big role in the success of scrum, because it is the first reference point for people wanting to learn more about it.

The newly added section containing the scrum values is a very welcome addition to improve the scrum framework even further.

I will definitely present those scrum values to my team to make them think about this topic and get a discussion rolling. Let me know, if you did the same and share your experience in a comment.

That’s it for this week. Stay tuned and HabbediEhre!

Acceptance criteria – an easy way of defining scope

Acceptance criteria are a straightforward way of describing what needs to be in place before a task can be marked as done.

You might have experienced the following situation: you are in a refinement meeting and you just finished discussing a certain task. Now the team is about to estimate the effort of the task using planning poker: The poker cards for estimation show values between 3 and 13 story points!

So some people in the team think the task is more than four times as much effort to implement as other team members do.

While discussing the estimation difference, the team realizes that team members had a completely different scope of the task in their heads. That's why there were such big differences in the estimations.

Unclear scope of task

I have been in many discussions where people talk about what the scope of a certain task is. Although the description of the task is long and detailed, it is not clear what exactly needs to be delivered as part of it.

This is especially uncomfortable when the discussion is started during the sprint by the person who is working on the task: is this also part of the task?

When somebody creates a new task in the backlog, that person has their own view on the topic. Sometimes the description is just one sentence and sometimes it is a whole page. Coming up with the right amount of description is not easy.

Problem of task description too short

When creating a task some people try to keep the description of the task as short as possible. They think that only the members of the team have to understand the scope of the task. And as the team will discuss the scope of the task in a refinement meeting, the details will be talked through anyway. So there is no need to have a detailed description, right?

Wrong!

Not everybody is always present in those meetings; team members might be on holiday or just not paying attention. And it is completely normal that people forget some details of scope discussions.

Therefore writing down the most important things in the task description is clearly a must for a proper backlog item.

Problem of task description too long

Then there are some people, including myself, who tend to write too-long task descriptions. The idea is to make the scope of the task understandable to everybody, even to non-technical people. This results in a long text explaining the purpose, dependencies on other teams, things that are out of scope, etc.

The problem is that it is not clear what is part of the task and what is just there for clarification. Different people might interpret the description differently, because they have different backgrounds. And some people might not even read the description, because it is too long.

Finding the right balance of clear-enough description versus too-detailed description is not simple.

Adding acceptance criteria to a task

On top of having a title and a description, you can also add acceptance criteria to a task.

Acceptance criteria are a list of conditions that the software must satisfy to be accepted by the stakeholders.

They define what the software should do, without specifying implementation details. So they don't state how the software should do something, but only what it should do.

Acceptance criteria should be relatively high-level while still providing enough detail to be useful.

They should include functional criteria, non-functional criteria and performance criteria.

Functional criteria

Functional criteria define how the software should work. They define the business processes in the software. For instance: “the user can search servers by brand and type”.

One format for defining functional criteria is the Given/When/Then format:

Given some precondition When I do some action Then I expect some result.
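
Sticking with the server search example from above, a concrete criterion in this format could look like this (the details are made up):

Given I am on the server search page
When I filter by brand “Dell” and type “rack server”
Then only Dell rack servers are shown in the result list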

Non-Functional criteria

Non-functional criteria define conditions for non-functional requirements. For instance: “the search button complies with the design of the search button on the front page”.

Performance criteria

In case performance is critical, adding criteria that define performance thresholds makes sense. For instance, you can add requirements for the maximum response time of a certain API call.
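
To put the three types together, a hypothetical task “Implement server search” could carry acceptance criteria like these (the performance numbers are made up):

  • The user can search servers by brand and type (functional)
  • The search button complies with the design of the search button on the front page (non-functional)
  • The search API responds within 500ms for 95% of the requests (performance)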

Benefits of acceptance criteria

Acceptance criteria make it clear, in a simple and usually short list of conditions, what should be done as part of the task. Therefore they are very helpful for the team in understanding the scope of a task.

You can see the benefits of acceptance criteria during refinement meetings: everybody is on the same page when it comes to the estimation of the task.

Next to that, acceptance criteria are also very helpful for the tester. They make the tester's job a bit easier, because he/she has a starting point on what needs to be tested.

Disadvantage of acceptance criteria

The downside of acceptance criteria is that everyone might rely on the list made by the creator of the task, without questioning whether the list is correct or complete.

If you don't have acceptance criteria yet, then just give them a try for a few sprints and see how it goes. In my experience they helped the team make tasks much clearer, with just a little bit more effort during the creation of the task.

Ok, that’s it for today.

I’m curious if you define acceptance criteria for each task and whether you find them helpful or just overhead. Let me know in a comment!

Stay tuned and until next week. HabbediEhre!