The grand finale

After two semesters of work, this project comes to an end. Unfortunately we were not able to achieve world domination. But that’s ok, we learned a lot trying…

Jokes aside, this project helped us gain a lot of knowledge and experience with the frameworks and systems we used. Unfortunately we were not able to implement everything we wished for, but this experience will help us with our next project.

So for the finale, we hereby present to you our results:


You can find the slides and demo video of our final here.

The slides for our midterm are here.

Project Management

As you can see in one of our blog posts, we utilized function points to estimate the time needed to implement the new use cases in the second half of our project. These estimations were unfortunately a bit too optimistic, and we were not able to accomplish everything we wanted. In the following you can see which use cases we were able to finish and how much time we spent on each use case in comparison to the estimation.

As you can see, our estimation was right for only one use case, and we didn't get the chance to work on two of them at all. The yellow use case marks one that we were able to complete but that needed much more time than expected. We were not able to complete the red use case, even though we also spent a lot of time on it.
We think one reason for our miscalculation is that we are inexperienced with using function points as a metric and probably made a few mistakes in our calculations. On the other hand, we generally underestimated the complexity of the work that had to be done to fulfill these use cases.

Here is also a chart that breaks down how much time every one of us spent on the project in general. The time is measured in minutes.


  • Automated Testing
    We already set the foundation for automated testing, but did not implement it in our daily business yet. You can find further information in our fifth blog post about our feature files.
  • Automated Deployment
    Our whole deployment is already automated. Further information on this topic can be found in our SAD.

Blog posts

We hope you guys also had fun working on your projects and see you next semester!

Installation of our services

Today we want to show you how you can install our systems on your own machine.

Prerequisites: Docker, an Android phone

Step 1: set up the API and webservice

Note: we are storing all container data under /docker/*service*; you may edit this path to your liking.

The API, webservice, and database will need to share a network:
docker network create web

For the API you will need a Postgres database; we are using the following docker-compose file:
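A minimal sketch of what such a compose file could look like; service names, volume paths, and credentials here are placeholders, not our exact configuration:

```yaml
version: "3"
services:
  postgres:
    image: postgres:12
    restart: unless-stopped
    networks:
      - web
    volumes:
      # Container data lives under /docker/<service>, as noted above.
      - /docker/postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: choose-a-password
  pgadmin:
    image: dpage/pgadmin4
    restart: unless-stopped
    networks:
      - web
    ports:
      - "8084:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD: choose-a-password
networks:
  web:
    # The shared network created with `docker network create web`.
    external: true
```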

In pgAdmin, create the role "api" with all rights, create a new database called "maphynn", and initialize it with this SQL script.

Then you have to log in to our registry:
docker login

The api stores some config in a .env file with the following content:


The JWT_TOKEN_SECRET is a password that is used to sign the JWT responsible for verifying requests. You can use any password you like, but we recommend a minimum length of 16 characters.

The BACKDOOR_HASH_KEY is a bcrypt hash, which you can create here: just enter any password for Encrypt and leave the rounds at 12. Enter the hashed result as BACKDOOR_HASH_KEY. This password is used for secure communication between the webservices and the API.
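Assuming just the two variables described above, a sketch of the API's .env (values are placeholders):

```ini
# Sketch of the api .env; only these two variables are described in the post.
JWT_TOKEN_SECRET=at-least-16-characters-long
BACKDOOR_HASH_KEY=$2a$12$...paste-your-bcrypt-hash-here...
```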

And run the image:

docker run --restart unless-stopped -d --name maphynn_api --net=web -v /docker/.env:/usr/src/app/.env -v /docker/api/images/profile_pictures:/usr/src/app/static/images/profile_pictures -p 8083:1234

The webservice also has a .env:

API_ADDRESS=your address

In LOCAL_KEY, enter the clear-text password which you encrypted with bcrypt.
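A sketch of the webservice .env with placeholder values; the address here assumes the API container is reachable by name inside the shared Docker network:

```ini
# Sketch of the webservice .env; both values are placeholders.
API_ADDRESS=http://maphynn_api:1234
LOCAL_KEY=the-clear-password-you-hashed
```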

and can be started with this command:

docker run --restart unless-stopped -d --name maphynn_webservice --net=web -v /docker/.webenv:/usr/src/app/.env -p 8082:1234

Now we can start the website:

docker run --restart unless-stopped -d --name maphynn_website -p 8081:80

To run the app, go to and download the latest development.apk and install it on an Android phone. You have to allow installation from an unknown source first.

Metrics (e.g. a bunch of numbers and graphs that look cool (hopefully))

Ladies and gentlemen, today I want to introduce you to our metrics system. We are using SonarQube to analyse where our code lacks attention. SonarQube is quickly set up inside a Docker container (maphynn_sonarqube) and provides support for several languages out of the box, such as TypeScript, which is used by our backend. Plugins can provide metrics for other languages; we used this plugin for our frontend, which uses Flutter and Dart. The sonar-scanners are run during the CI/CD pipeline on GitLab for the dev branches. The analysis of the website is currently not working because we do not have any unit tests and the sonar-scanner expects test and coverage outputs.
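For reference, the scanner is usually configured via a sonar-project.properties file; the property keys below are standard SonarQube settings, while the values are assumptions rather than our exact setup:

```properties
# Hypothetical scanner configuration for the TypeScript API.
sonar.projectKey=maphynn_api
sonar.host.url=http://maphynn_sonarqube:9000
sonar.login=your-analysis-token
sonar.sources=src
sonar.tests=test
# Without a coverage report like this, the analysis fails --
# which is why the website analysis does not work yet.
sonar.javascript.lcov.reportPaths=coverage/lcov.info
```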

Now let's take a look at how metrics improved our code. I will take the app as an example. At first there were approx. 900 code smells?! What in the world have I done? … well, maybe it isn't entirely my fault… it seems the plugin for Dart enables EVERY dartanalyzer rule. Some of them are even contradictory -> use the final modifier for variables that are only assigned once. OK, did that, but hey, don't do it for variables in methods… but you told me to… anyway. It also doesn't like JSON strings, no idea why… Anyway, after a bit of work we are down to 400ish, and from now on I will use final and write some comments.

Decreasing number of code smells (and bugs).
The use of the modifier final was probably the biggest concern

The following is an example where I decided not to change anything:

I don’t see why I wouldn’t want to do that. Sure, the name of the variable implies the type, but it also doesn’t hurt.

For the API there were security issues, which turned out to be the ‘hardcoded’ passwords for the API documentation, so nothing to worry about.

So this is our metrics system. Hope you find this interesting and see you guys next week!

P.S. If anybody else uses SonarQube and has successfully activated authentication via their GitLab, please tell me how. Somehow GitLab appends a “/gitlab” to the end of the redirect URL and it doesn’t work.

P.P.S. I think it might be better to set up metrics earlier, when you have less code. It might be a lot of set-up in the beginning, but you won’t be overrun by things you did wrong during the last year.

Design Patterns

The topic of this week’s blog post is design patterns.
The definition at the beginning of the Wikipedia article states the following:

In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

We did research on different design patterns, like the factory and the observer pattern. But we could not really find a point in our source code where we could apply one of these more common design patterns in a meaningful way.
The reason for this is that we take great advantage of the functionality our frameworks provide. In the case of our frontend this would be Flutter and AngularDart, and in the backend things like Socket.IO and Sequelize. As a result we only write small pieces of independent code, and it was difficult to find a design pattern with which we could improve it.

The mentioned frameworks already provide us with different design patterns, which we didn’t have to implement on our own. The first example of this is the observer pattern, which is provided by Socket.IO. The observer pattern is pretty simple and basically consists of two different parts: observers and subjects. An observer subscribes to the subjects from which it wants to get updates when changes happen. One observer can subscribe to several different subjects, and every subject can have many observers. In our case we only have one subject, which is our webservice on the server. Every client who uses our service and is logged in subscribes to this subject.
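Stripped of the Socket.IO specifics, the pattern described above can be sketched in plain TypeScript (all names here are illustrative, not our actual code):

```typescript
// Minimal observer pattern: one subject (the webservice), many observers (clients).
type Observer<T> = (update: T) => void;

class Subject<T> {
  private observers: Observer<T>[] = [];

  subscribe(observer: Observer<T>): void {
    this.observers.push(observer);
  }

  notify(update: T): void {
    // Push the update to every subscribed observer.
    for (const observer of this.observers) observer(update);
  }
}

// Usage: logged-in clients subscribe to the webservice and receive updates.
const webservice = new Subject<string>();
const received: string[] = [];
webservice.subscribe(msg => received.push(`client1: ${msg}`));
webservice.subscribe(msg => received.push(`client2: ${msg}`));
webservice.notify("friend request");
```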

In the end we chose to implement the module pattern in our API code, in a simplified way. We found that we were using the same function call for Sequelize at different points in our code. To increase the maintainability of our code we moved it into a separate function. Because of this we only have to change the call in this one place when necessary. For now this seems to be only a minor problem because our project is still pretty easily manageable at this size. But as our codebase grows, little things like these become more important.
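The idea behind this refactoring can be sketched as follows; a plain in-memory array stands in for the Sequelize model here, and all names are illustrative:

```typescript
// Sketch of the refactoring: the repeated query moves into one shared function.
interface User { id: number; name: string; }

const users: User[] = [
  { id: 1, name: "tim" },
  { id: 2, name: "felix" },
];

// Before: every call site repeated the same lookup inline.
// After: one module-level function, so the query logic changes in one place only.
function findUserByName(name: string): User | undefined {
  return users.find(user => user.name === name);
}

// Both former call sites now go through the helper.
const sender = findUserByName("tim");
const receiver = findUserByName("felix");
```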

In the following you can see the difference that this change caused in our class diagram. The difference is pretty small because we only made a minor change to the code.

Old version on the left and new version on the right.

And here you can see the changes we did in our code:

Old code
New code

Best regards,


Hello everybody,

this week’s task was to refactor deliberately badly written code. To improve the code, we added JUnit tests and restructured it using the refactoring tools of our preferred IDEs.

These are the links to our Git repositories:

Below you will find a little note from each of us on how our IDE helped with the refactoring.

Tim and Marvin:
We used IntelliJ to do the refactoring and could automate nearly every step of it, which saved time and left less room for error, to an extent we didn’t expect beforehand.

I used Eclipse to refactor since I was already working with it. It helped a lot with the refactoring, and even though it sounds very subtle, auto-formatting helps a lot.

In my case I tried refactoring in Visual Studio Code, as I do a lot of projects in this IDE. VS Code offers a very wide range of refactoring functions, and if that’s not enough for you, you can choose from countless plugins. For more information about refactoring in VS Code just click here.

Greetings from your Maphynn Team !

Function Points

The topic of this week’s blog post is the calculation of function points for past and future use cases.
Function points are a unit of measurement for software components; with them it is possible to evaluate the complexity of a chunk of software. If you want to learn more about function points and how to calculate them, we can recommend the following YouTube channel:
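As a rough sketch, an unadjusted function point count is a weighted sum of the counted components. The example below uses the standard IFPUG average weights, which are not necessarily the exact weights we used in our documents:

```typescript
// Unadjusted function point count with the standard IFPUG average weights:
// external inputs 4, external outputs 5, external inquiries 4,
// internal logical files 10, external interface files 7.
interface Counts {
  inputs: number;     // external inputs (EI)
  outputs: number;    // external outputs (EO)
  inquiries: number;  // external inquiries (EQ)
  files: number;      // internal logical files (ILF)
  interfaces: number; // external interface files (EIF)
}

function unadjustedFunctionPoints(c: Counts): number {
  return c.inputs * 4 + c.outputs * 5 + c.inquiries * 4
       + c.files * 10 + c.interfaces * 7;
}

// Example: a small use case with 2 inputs, 1 output and 1 internal file.
const fp = unadjustedFunctionPoints({
  inputs: 2, outputs: 1, inquiries: 0, files: 1, interfaces: 0,
});
// fp = 2*4 + 1*5 + 1*10 = 23
```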

Here you can see an example of our calculations:

You can find these calculations in every use case document of our project.
Here is a link to the example above:

All our calculations are also collected in the following Google sheet:

But what do you do after collecting all this data? We used the function points to improve our time estimations for future use cases. To accomplish this we created the following graph based on the table below:

Time spent per Function Point
Function Points table

Detailed view:

Based on the use cases from last semester we fitted a trendline that correlates the calculated function points with the time spent. Based on that we can now approximate the time needed for future use cases. As you can see, some of the blue points are very far away from the trendline, like the use cases “Login” (Log) and “Register” (Reg). We needed more time to complete these use cases because they were the ones we started to work on first, when we had little to no experience with these technologies. “Add/Remove Friend” (ARF), on the other hand, took less time relative to the other use cases. This was because we were able to reuse some things from past use cases and didn’t have to figure everything out from scratch; one of these things was, for example, how to send an HTTP request in Dart and how to work with the received answer.
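Such a trendline is an ordinary least-squares fit through the (function points, time) pairs. A sketch with made-up sample data, not our actual measurements:

```typescript
// Least-squares trendline through (functionPoints, hours) pairs,
// then used to predict the effort of a new use case.
function trendline(points: [number, number][]): (x: number) => number {
  const n = points.length;
  const meanX = points.reduce((s, [x]) => s + x, 0) / n;
  const meanY = points.reduce((s, [, y]) => s + y, 0) / n;
  let num = 0, den = 0;
  for (const [x, y] of points) {
    num += (x - meanX) * (y - meanY);
    den += (x - meanX) * (x - meanX);
  }
  const slope = num / den;
  const intercept = meanY - slope * meanX;
  return x => slope * x + intercept;
}

// Past use cases as (function points, hours spent) -- illustrative numbers.
const estimate = trendline([[10, 8], [20, 15], [30, 24]]);
// Predicted hours for a new use case worth 25 function points.
const hoursForNewUseCase = estimate(25);
```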

With the estimated time and the data from last semester we are now able to estimate the time we have to invest in our project this semester if we want to fulfill the goals we set.
You can see the time spent per workflow in the picture below. In an earlier blog post we explained what these workflows are. The yellow area is the time spent implementing our practical use cases. We marked the end of the first semester with the black arrow. There you can see that we spent around 40% of our time on implementing features. Because we also did a lot of setup work, which is now already done, we expect to spend 50% of our time on implementation in the future.

Here is also a link to our chart in YouTrack.

Thanks for reading this blog post. We would really appreciate your feedback.

Based on that we get an estimate of about 60 work hours needed for implementation, and if we double this we get an estimate of 120 hours that we will have to invest in total. This seems to be a manageable amount, and we will see whether reality matches it.

With best regards,
the MAPHYNN team

Risk Management

Hello everybody,

this week’s topic is risk management. We have listed several realistic scenarios that pose a risk. These scenarios are then sorted by a risk factor, which is calculated from the probability of occurrence and the impact. In order to mitigate these risks, avoidance strategies were drawn up and team members were assigned to monitor the individual risks. We are looking forward to your feedback.
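The ranking described above can be sketched as follows; the scales and example risks here are illustrative, not the entries of our actual table:

```typescript
// Risk factor = probability of occurrence x impact; risks are then
// sorted by that factor, highest first.
interface Risk {
  name: string;
  probability: number; // 0..1
  impact: number;      // e.g. 1 (low) to 5 (critical)
}

function sortByRiskFactor(risks: Risk[]): Risk[] {
  return [...risks].sort(
    (a, b) => b.probability * b.impact - a.probability * a.impact,
  );
}

const ranked = sortByRiskFactor([
  { name: "team member drops out", probability: 0.2, impact: 4 },       // 0.8
  { name: "framework update breaks build", probability: 0.5, impact: 2 }, // 1.0
  { name: "server outage", probability: 0.1, impact: 5 },               // 0.5
]);
```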

Risk Management Table

Greetings from your Maphynn Team !

Hello there!

Welcome back for semester two. Team MAPHYNN is back and ready to tackle the coming months of work despite the coronavirus. Speaking of the team: since our friend Niklas left the team (F), Team MAPHYNN is now:

  • Tim: website
  • Felix: backend -> matchmaking
  • Clemens: backend -> api
  • Marvin: app

Last semester we created a basic website and app with a working login and friend list. For this semester we plan to implement a chat mechanic and the heart of our project: the matchmaking algorithm.

Our goals are therefore defined by these use cases marked in orange:

Midterm Presentation

Hello, everybody,

in this blog post we want to summarize all the work we have done so far. Below is a list of the weeks we have worked on Maphynn. Each blog post is linked below and contains all useful information about its topic. We have also linked some of the documents and files with requirement specifications directly from this post. Have fun reading and leave us a comment if you like!


Project Management


  • Automated Testing
    We already set the foundation for automated testing, but did not implement it in our daily business yet. You can find further information in our fifth blog post about our feature files.
  • Automated Deployment
    Our whole deployment is already automated. Further information on this topic can be found in our SAD.

Blog posts

We wish you all a Merry Christmas and a Happy New Year.

Greetings from your Maphynn Team