Mockito – an introduction to mocking frameworks

The following tutorial gives an overview of the functionality of mocking frameworks and explains why you should use them to write better tests. Mockito, the most popular mocking framework for Java, will serve as a practical example.

What is Mocking?

Mocking, in general, means imitating the behaviour of something else. In the context of object-oriented programming, you want to fake the behaviour of an object without having to create a real instance of it. But why would you want to do something like this?

Good code should also be well tested. Many of those tests will be unit tests, which verify the behaviour of a single component without relying on its dependencies on other objects. The following picture illustrates this well.

Source: https://[email protected]/what-is-mocking-in-testing-d4b0f2dbe20a

By mocking the needed objects, you are able to isolate the component that you want to test. It is then easier to locate a possible error, because only the test of the broken component will fail if something goes wrong.

There are two different approaches to mocking: proxy-based mocking and class-loader remapping. Mockito uses the proxy-based approach.
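To give a rough idea of how proxy-based mocking works under the hood, here is a minimal sketch using Java's built-in dynamic proxies. This is not Mockito's actual implementation, and the BookService interface is a made-up example, but the principle is the same: calls are intercepted by a handler that returns canned answers instead of hitting a real object.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Hypothetical interface we want to fake without writing an implementation.
    interface BookService {
        String findTitle(int id);
    }

    // Build a dynamic proxy whose handler returns a canned answer,
    // which is essentially what a stubbed mock does internally.
    static BookService createFake() {
        InvocationHandler handler = (proxy, method, args) ->
                method.getName().equals("findTitle") ? "Stubbed title" : null;
        return (BookService) Proxy.newProxyInstance(
                BookService.class.getClassLoader(),
                new Class<?>[]{BookService.class},
                handler);
    }

    public static void main(String[] args) {
        BookService fake = createFake();
        System.out.println(fake.findTitle(42)); // prints "Stubbed title"
    }
}
```

Mockito generates such proxies (and subclasses, for concrete classes) for you, so you never have to write this boilerplate yourself.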

Why Mockito?

I chose Mockito for this tutorial because it is the most widely used mocking framework for Java and one of the most used Java libraries in general. It has many advanced features, but it is also easy to write good tests with just the basic tools. When you encounter a problem, the chance that someone has already solved it, or something similar, is very high. Also, the general concept of proxy-based mocking should transfer to any other mocking framework of your choice.

Mockito logo

How to use Mockito

To use Mockito, you have to include the Mockito library and a testing framework in your project. I will use JUnit 5 as my testing framework. For an easy setup I used Maven. You can find the necessary Maven artifacts under the following coordinates:

  • org.mockito:mockito-core:2.2.2
  • org.junit.jupiter:junit-jupiter:5.5.2

Then all you need to do is add the following imports at the beginning of your test class, and you are ready to write unit tests with mocks.

Necessary imports

Basic functionalities

In the following part I will explain some of the basic functionalities of Mockito. To illustrate them, I wrote a little test project that showcases each feature. You can find the source code in the following Git repository:

It contains a simple Library class and a Book class. The different tests cover the functions of the Library class, and I will always mock the Book class to show how you can work with Mockito. When writing about a certain feature, I will refer to the test class where it is implemented.

Create a mock and set its behaviour

Let us begin with the most important part: how do I create a mock of the needed class? It is very simple: you just call the static method mock(MyClass.class) to create a new mock object. Next, you must define the behaviour of your mock object when certain functions are called on it.

This can be realized with when(mock.doSomething()).thenReturn(xyz) and is called stubbing. With this you can create the environment that your test needs to work. You can find stubbing in every test class, but in its most basic form you can see it in the class.

Now you are already able to get your test working without dependencies on classes other than the one under test. But we are still not verifying whether the behaviour of our class is as intended. First, you can use the different assert functions of JUnit to verify the output of the tested methods. But Mockito also provides some extra tools, so you can write even better tests.
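As a small sketch of these two steps, here is what creating a mock, stubbing it, and asserting with JUnit could look like. The Book class below is a simplified stand-in for the one in the repository, not the actual project code:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class LibraryTest {
    // Simplified stand-in for the Book class of the example project.
    static class Book {
        public String getTitle() { return "real title"; }
        public int getPages() { return 100; }
    }

    @Test
    void stubbingExample() {
        // Create the mock; no real Book logic is executed.
        Book book = mock(Book.class);

        // Stubbing: define what the mock returns for these calls.
        when(book.getTitle()).thenReturn("Moby Dick");
        when(book.getPages()).thenReturn(635);

        // Verify the environment with plain JUnit assertions.
        assertEquals("Moby Dick", book.getTitle());
        assertEquals(635, book.getPages());
    }
}
```

Note that unstubbed methods on a mock return default values (null, 0, or false), not the behaviour of the real class.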

Verification with Mockito

You can verify whether the functions of a mock object have even been called within your test. This is helpful to check that the code is executed as intended and does not just somehow give back the correct results. For this you can use the verify() function. You can also specify how often a function should have been called, either with an exact number or with a minimum or maximum. In the class you can find a simple example of the use of the verify() function.
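A minimal sketch of verification could look like this (the Book interface here is a made-up stand-in, not the project's class):

```java
import static org.mockito.Mockito.*;

public class VerifyExample {
    // Hypothetical collaborator, standing in for the Book class.
    interface Book {
        void open();
        void close();
        String getTitle();
    }

    public static void main(String[] args) {
        Book book = mock(Book.class);

        // Code under test would normally trigger these calls.
        book.open();
        book.open();
        book.getTitle();

        verify(book, times(2)).open();       // called exactly twice
        verify(book, atLeast(1)).getTitle(); // called at least once
        verify(book, never()).close();       // never called
    }
}
```

If any of these expectations is violated, verify() throws an exception and the test fails.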

In this class you can also find an example of the InOrder feature. To use it, you create a new InOrder object, passing the mock objects that should be verified as arguments. Then you call verify() on the InOrder object to check whether the functions were called in the same order in which you verify them. This can be helpful in multithreading scenarios where certain functions must be called before others. The given example is not very practical, but it shows how to use the feature.
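In code, ordered verification could look like this sketch (again with a hypothetical Book interface):

```java
import static org.mockito.Mockito.*;

import org.mockito.InOrder;

public class InOrderExample {
    interface Book {
        void open();
        void read();
        void close();
    }

    public static void main(String[] args) {
        Book book = mock(Book.class);

        // The calls happen in this order...
        book.open();
        book.read();
        book.close();

        // ...and the InOrder object checks exactly that order.
        InOrder inOrder = inOrder(book);
        inOrder.verify(book).open();
        inOrder.verify(book).read();
        inOrder.verify(book).close();
    }
}
```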

Alter the behaviour of the mock while testing

Sometimes you also want to test your code under different scenarios. Then you must change the behaviour of your mock several times. In theory you can do this by stubbing the needed function again with a different return value after the first use, but in big tests this becomes very tedious and bloats the code. That's why there are different options to control the behaviour of your mock for different calls.

The simplest variant is the reset() function. When you call it on a mock, all stubbing is reset to the default values. You can find a simple example in the test class in the function testAvgChapterPerBook().
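A minimal sketch of reset() (with a hypothetical Book interface):

```java
import static org.mockito.Mockito.*;

public class ResetExample {
    interface Book {
        int getPages();
    }

    public static void main(String[] args) {
        Book book = mock(Book.class);
        when(book.getPages()).thenReturn(200);

        System.out.println(book.getPages()); // prints 200 (stubbed)

        // reset() removes all stubbing; the mock falls back to defaults.
        reset(book);
        System.out.println(book.getPages()); // prints 0 (default int value)
    }
}
```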

A little more advanced is the use of consecutive stubbing. With this feature you define the return values for every call at once, either by chaining thenReturn() calls or by passing several values to a single thenReturn(). The values are then returned in the defined order. An example can be found in the class. Consecutive stubbing can be helpful when you want to test different scenarios in one test: you get a clear separation between the test code and the definition of your mock object.
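Both forms of consecutive stubbing can be sketched like this (Book is again a hypothetical stand-in):

```java
import static org.mockito.Mockito.*;

public class ConsecutiveExample {
    interface Book {
        String nextChapter();
    }

    public static void main(String[] args) {
        Book book = mock(Book.class);

        // Chained form; equivalent to:
        // when(book.nextChapter()).thenReturn("Chapter 1", "Chapter 2");
        when(book.nextChapter())
                .thenReturn("Chapter 1")
                .thenReturn("Chapter 2");

        System.out.println(book.nextChapter()); // prints "Chapter 1"
        System.out.println(book.nextChapter()); // prints "Chapter 2"
        System.out.println(book.nextChapter()); // prints "Chapter 2" (last value repeats)
    }
}
```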

Partial mocking

Finally, I want to give a brief introduction to the topic of partially mocking real objects. The basic idea is that you use a real instance of the needed object for testing and only stub selected functionalities where necessary. In the past this was considered bad practice and a code smell, because the reason you use mocks is precisely to avoid using the real object. But there are several use cases where it can be necessary, for example when working with legacy code or third-party interfaces.

To use this feature, you create your mock object with the spy() method, passing a real instance of the object to mock as an argument. Then you can use your mock instance as usual. The difference is that when you do not stub a function, the function of the real object is called instead of returning a default value. An example can be found in the class and the function testWithSpy().
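A small sketch of a spy, here using a real ArrayList instead of the project's classes:

```java
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;

public class SpyExample {
    public static void main(String[] args) {
        // spy() wraps a real instance: unstubbed calls hit the real object.
        List<String> list = spy(new ArrayList<>());

        list.add("real entry");                  // really stored in the list
        System.out.println(list.get(0));         // prints "real entry" (real behaviour)

        // Only size() is stubbed; everything else stays real.
        when(list.size()).thenReturn(100);
        System.out.println(list.size());         // prints 100 (stubbed)
        System.out.println(list.get(0));         // still prints "real entry"
    }
}
```

Be aware that when(...) on a spy invokes the real method once while stubbing; for methods where that is dangerous, Mockito's doReturn(...).when(spy) form avoids the real call.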

Thank you for reading this little tutorial about Mockito. I hope it helped you get a first impression of how to work with mocks. Any feedback is appreciated.

Best regards,

Tim Leistner

Design Patterns

The topic of this week's blog post is design patterns.
The definition at the beginning of the Wikipedia article states the following:

In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

We researched different design patterns, like the factory and the observer pattern. But we could not really find a point in our source code where we could apply one of these common design patterns in a meaningful way.
The reason for this is that we take great advantage of the functionality our frameworks provide: in the frontend that is Flutter and AngularDart, and in the backend things like Socket.IO and Sequelize. As a result, we only write small pieces of independent code, and it was difficult to find a design pattern with which we could improve it.

The mentioned frameworks already provide us with different design patterns that we did not have to implement on our own. The first example is the observer pattern, which is provided by Socket.IO. The observer pattern is quite simple and basically consists of two parts: observers and subjects. An observer subscribes to the subjects it wants to receive updates from when changes happen. One observer can subscribe to several different subjects, and every subject can have many observers. In our case we only have one subject, which is our web service on the server. Every client who uses our service and is logged in subscribes to this subject.
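The pattern itself can be sketched in a few lines. This is a generic illustration in plain Java, not our actual Socket.IO code:

```java
import java.util.ArrayList;
import java.util.List;

public class ObserverDemo {
    // Observers register with a subject and get notified on every change.
    interface Observer {
        void update(String event);
    }

    static class Subject {
        private final List<Observer> observers = new ArrayList<>();

        void subscribe(Observer o) { observers.add(o); }

        // Push the event to every subscribed observer.
        void publish(String event) {
            for (Observer o : observers) {
                o.update(event);
            }
        }
    }

    public static void main(String[] args) {
        Subject webService = new Subject(); // our single subject
        webService.subscribe(e -> System.out.println("Client A got: " + e));
        webService.subscribe(e -> System.out.println("Client B got: " + e));
        webService.publish("new message");
    }
}
```

Socket.IO implements exactly this mechanism for us, with the clients' socket connections taking the role of the observers.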

In the end we chose to implement the module pattern in our API code, in a simplified way. We found that we were using the same Sequelize call at several points in our code. To increase maintainability, we moved it into a separate function, so we only have to change the call in this one place when necessary. For now this seems like a minor issue, because our project is still easily manageable at its current size. But as our codebase grows, little things like this become more important.
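The idea of the refactoring can be sketched like this. Our real backend code is JavaScript with Sequelize; the class and method names below are purely hypothetical stand-ins:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the duplicated lookup is moved into one shared
// helper, so future changes have to happen in a single place only.
public class UserRepository {
    private final Map<String, String> nicknamesById = new HashMap<>();

    public void save(String userId, String nickname) {
        nicknamesById.put(userId, nickname);
    }

    // Before the change, this lookup logic was copied at several call
    // sites; now every call site goes through this one function.
    public String findNickname(String userId) {
        return nicknamesById.getOrDefault(userId, "unknown");
    }
}
```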

In the following you can see the difference this change caused in our class diagram. The difference is quite small, because we only made a minor change to the code.

Old version on the left and new version on the right.

And here you can see the changes we did in our code:

Old code
New code

Best regards,

Function Points

The topic of this week's blog post is the calculation of function points for past and future use cases.
Function points are a unit of measurement for software components; with them, it is possible to evaluate the complexity of a piece of software. If you want to learn more about function points and how to calculate them, we can recommend the following YouTube channel:

Here you can see an example of our calculations:

You can find these calculations in every use case document of our project.
Here is a link to the example above:

All our calculations are also collected in the following Google sheet:

But what do you do after collecting all this data? We used the function points to improve our time estimates for future use cases. To accomplish this, we created the following graph based on the table below:

Time spent per Function Point
Function Points table

Detailed view:

Based on the use cases from last semester, we fitted a trendline that correlates the calculated function points with the time spent. With it we can now approximate the time needed for future use cases. As you can see, some of the blue points are very far away from the trendline, like the use cases "Login" (Log) and "Register" (Reg). We needed more time to complete these use cases because they were the ones we started working on first, when we had little to no experience with these technologies. "Add/Remove Friend" (ARF), on the other hand, took less time relative to the other use cases, because we were able to reuse parts of past use cases and did not have to figure everything out from scratch, for example how to send an HTTP request in Dart and how to work with the received answer.
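The trendline itself is just a least-squares fit of hours over function points. With made-up example numbers (not our real measurements), the estimation could be computed like this:

```java
public class TrendlineDemo {
    // Fit hours ≈ slope * functionPoints + intercept (least squares).
    static double[] fit(double[] fp, double[] hours) {
        int n = fp.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += fp[i];
            sumY += hours[i];
            sumXY += fp[i] * hours[i];
            sumXX += fp[i] * fp[i];
        }
        double slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double intercept = (sumY - slope * sumX) / n;
        return new double[]{slope, intercept};
    }

    public static void main(String[] args) {
        // Hypothetical data: function points and hours of past use cases.
        double[] fp = {10, 20, 30, 40};
        double[] hours = {12, 19, 33, 41};

        double[] line = fit(fp, hours);
        // Estimated hours for a future use case with 25 function points:
        double estimate = line[0] * 25 + line[1];
        System.out.printf("%.1f hours%n", estimate);
    }
}
```

Outliers like "Login" and "Register" pull such a fit upward, which is why knowing their cause matters when reading the estimate.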

With the estimated times and the data from last semester, we are now able to estimate the time we have to invest in our project this semester if we want to fulfil the goals we set.
You can see our time spent per workflow in the picture below; in an earlier blog post we explained what these workflows are. The yellow area is the time we spent implementing our practical use cases. We marked the end of the first semester with the black arrow. There you can see that we spent around 40% of our time on implementing features. Because we also did a lot of setup work, which is now already done, we expect to spend 50% of our time on implementation in the future.

Here is also a link to our chart in YouTrack.

Based on that, we get an estimate of about 60 work hours needed for implementation, and if we double this we arrive at an estimate of 120 hours that we will have to invest. This seems to be a manageable amount, and we will see whether reality matches it.

Thanks for reading this blog post. We would really appreciate your feedback.

With best regards,
the MAPHYNN team


This week's retrospective, for us as a team, was very productive and informative. We found quite a few points that caught us a bit off guard, giving us some good starting points to further improve the teamwork on our wonderful project. We were happy with the meeting overall and are going to continue doing retrospectives in the future.

In more detail:
We constructed a clear plan to improve on the points we discussed in the meeting, sorting the steps by importance. Everyone participated and contributed to the success of this meeting.
However, the only real thing keeping us from maximum efficiency at our retrospective was the time management. It took a long time to write down some simple bullet points, just because we did not understand the goal of this part (potential). We think those steps could be done a lot quicker or even prepared beforehand. Overall this should be sorted out in the next few weeks, once a retrospective is something "normal" in our sprint.

These are the Flipcharts generated by this retrospective:

Project management tool

The topic of this week's blog post is our project management tool and its connection to our other tools.
We are using YouTrack as our project management tool, because we are also using IntelliJ as our IDE and connecting the two is very easy. YouTrack is a very powerful tool with which you can manage big projects. Because of this it is also quite complex, and we are still learning how to utilize it for our project.

So far we have set up an agile board where we organize our issues. We only recently began to use Scrum and sprints in our project, which is why our first sprint contains the work items of the first six weeks. From now on, our sprints will last one week, and we will meet at least weekly to discuss which tasks to tackle in the next sprint.

It was really easy to connect IntelliJ with YouTrack: you just have to install a plugin and connect to your YouTrack server. As you can see in the picture below, we see our assigned tasks directly in the IDE and can also automatically track the time we spend developing, as shown on the right. This is very useful, because tracking time by hand is often quite imprecise.

We also connected YouTrack with our GitLab server. On the one hand, we can simply link an issue to a task in YouTrack; on the other hand, YouTrack can also parse comments for tasks directly out of our commit messages on GitLab. You can find an example in the issue linked below.

Our agile board was already mentioned at the beginning of this blog post. If you follow the next link, you can find our agile board with the current sprint for week seven. As you can see, our issues are tagged with a member of our team and a subsystem of our project, as well as with the fitting workflow and phase of the RUP model. The estimated and spent time for each task is also displayed.

Last but not least, you can find a burndown chart and a Gantt chart, generated from the issues of our first sprint, under the links below.

Week 4: Use Cases

The next step in our project is to define exactly which functions we are going to implement in our system.

To begin with, we have described two use cases. Both are about changing information in the user profiles. One describes the "change of critical information", which includes the email address and the password of the user. The second one describes the customization of the user profile, regarding the profile picture and the nickname that is displayed when interacting with other users.

Both documents are linked in our SRS document, and you can also find them through the following links: