The grand finale

After two semesters of work, this project comes to an end. Unfortunately we were not able to achieve world domination. But that’s ok, we learned a lot trying…

Jokes aside, this project helped us gain a lot of knowledge and experience with the frameworks and systems we used. Unfortunately we were not able to implement everything we wished for, but this experience will help us with the next project.

So for the final we hereby present to you our stuff:

Documents

You can find the slides and demo video of our final here.

The slides for our midterm are here.

Project Management

As you can see in one of our blog posts, we used function points to estimate the time needed to implement the new use cases in the second half of our project. These estimates unfortunately turned out to be a bit too optimistic, and we were not able to accomplish everything we wanted. Below you can see which use cases we finished and how much time we spent on each one compared to the estimate.

As you can see, our estimate was accurate for only one use case, and we did not get to work on two of them at all. The yellow use case means that we completed it but needed much more time than expected. We were not able to complete the red use case, even though we also spent a lot of time on it.
We think one reason for our miscalculation is that we are inexperienced with function points as a metric and probably made a few mistakes in our calculations. On the other hand, we generally underestimated the complexity and the amount of work needed to fulfill these use cases.

Here is also a chart that breaks down how much time each of us spent on the project overall. The time is measured in minutes.

Quality

  • Automated Testing
    We already laid the foundation for automated testing, but have not made it part of our daily business yet. You can find further information in our fifth blog post about our feature files.
  • Automated Deployment
    Our whole deployment is already automated. Further information on this topic can be found in our SAD.

Blog posts

We hope you guys also had fun working on your projects and see you next semester!

Plotly dash beginners guide

This is a guide on how to get your own simple dashboard running locally. You will need a computer and an internet connection, but given that you are reading this, you already have both.

Since Dash is a Python framework, we need to install Python first. If you are using Linux it will most likely already be installed (check by typing python in your terminal). Otherwise you can download Python here. On Linux you might also have to install pip; you can find instructions here.

Now we have to install Dash. On Linux open a terminal and run the command ‘python -m pip install dash pandas’. On Windows open CMD or PowerShell and run ‘py -m pip install dash pandas’. Now we can use Dash and pandas. Pandas is a library for handling data sources.

Open up the editor or IDE of your choice and a terminal session in your source folder.

First of all we need to import all needed packages:

import dash                                  # the Dash framework itself
import dash_core_components as dcc           # pre-built components such as graphs and intervals
import dash_html_components as html          # HTML elements as Python objects
import plotly.graph_objects as go            # low-level Plotly figures

from dash.dependencies import Input, Output  # needed for the callback later on

import math
import random                                # used to generate the demo data

import pandas as pd                          # pandas handles our data source

Now let's get the data from a CSV file:

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2014_apple_stock.csv')

And create the layout of our dashboard:

app = dash.Dash(__name__)

app.layout = html.Div(children=[
    html.H1(children='Hello Dash'),

    html.Div(children='Hello there'),

    # fires every 1000 ms and triggers the callback defined below
    dcc.Interval(id='interval', interval=1000),

    # empty graph that will be filled by the callback
    dcc.Graph(id='scatter'),

    # static line chart built directly from the CSV data
    dcc.Graph(
        id='line',
        figure=go.Figure(
            go.Scatter(
                x=df['AAPL_x'],
                y=df['AAPL_y'],
                name='Share Prices (in USD)',
            )
        )
    ),
])

Now, to make the dashboard a little bit more interesting, let's add a callback:

@app.callback(
    Output(component_id='scatter', component_property='figure'),
    [Input(component_id='interval', component_property='n_intervals')]
)
def update_scatter(n_intervals):
    # before the first interval has fired there is nothing to draw yet
    if n_intervals is None:
        return dash.no_update

    # generate 100 new random points on every interval tick
    data = go.Scatter(
        x=[random.randint(0, 100) for _ in range(100)],
        y=[random.randint(0, 100) for _ in range(100)],
        mode='markers',
    )
    layout = dict(
        title='Scatter',
        height=600,
        width=600,
    )

    return go.Figure(data=data, layout=layout)

And finally run it!

if __name__ == '__main__':
    app.run_server(debug=True)

Now save the file as dashboard.py, switch to your open terminal and run the server:
Linux -> ‘python3 dashboard.py’
Windows -> ‘py dashboard.py’

You will see some output with information about the server, like the address where you can reach it.

If you enable the debug option, you will see debug information on the dashboard and the code will be ‘interactive’: whenever you change something and save the file, the server automatically restarts, which is super useful. Just make sure there are no errors : )

You can find the example with a bit more happening here.
Head over here to see all the graphs or inputs available.

Now you are ready to tinker around on your own and create beautiful dashboards.

GameMaker Studio 2 – Making games is for everyone!

The following tutorial shows how to clone the demo project used in the presentation. In the presentation we will program the demo project step by step.

What is GameMaker Studio 2?

GameMaker Studio 2 is an IDE that combines the game engine GameMaker with other useful tools for game development, such as a graphics editor and a sound mixer. GameMaker specializes in 2D game development and offers a series of tutorials and demos for different skill levels. The demo includes a short introduction to the most important editors and a project where the basic functions of GameMaker are explained.

Installing the Demo project

Step 1: Install the free version of GameMaker Studio 2 by selecting, downloading and installing the Free Trial version. Note: unfortunately a registration with YoYo Games is necessary!

Step 2: Open GameMaker Studio and log in. On the main screen, select the Source Control tab and then click “Clone Repository”.

Step 3: Enter the path to the demo project:
https://github.com/CHP-EmAS/GameMaker_Studio_2_ePortfolio.git
in the first field and select a new folder where the project should be cloned. Afterwards confirm with OK.

Step 4: After successful cloning a window opens where you have to select the demo project. Select the file “Demo.yyp” and confirm with OK.

Step 5: The project is now opened and can be started. To do this, you have to press the “Run” button at the top of the control bar. The game starts in an extra window.

Step 6: Now you can try out everything as you like. The graphics are CC-licensed and can be edited further.

Thank you for reading this little tutorial about GameMaker Studio 2. I hope it helped you get a first impression of how to work with GameMaker. Any feedback is appreciated.

Best regards,

Clemens Hübner

Mockito – an introduction into mocking frameworks

The following tutorial gives an overview of the functionality of mocking frameworks and why you should use them to write better tests. The popular mocking framework for Java, Mockito, will be used as a practical example.

What is Mocking?

Mocking, in general, describes imitating the behaviour of something else. In the context of object-oriented programming languages, you want to fake the behaviour of an object without having to create a real instance of it. But why would you want to do something like this?

Good code should also be well tested. Many of those tests will be unit tests, which should verify the behaviour of a single component, without relying on dependencies to other objects. The following picture illustrates this well.

Source: https://[email protected]/what-is-mocking-in-testing-d4b0f2dbe20a

By mocking the needed objects, you are able to isolate the component that you want to test. This makes it easier to locate a possible error, because only the test of the broken component will fail if something goes wrong.

There are two different methods of mocking, proxy-based mocking and class loader remapping. Mockito uses proxy-based mocking.

Why Mockito?

I chose to work with Mockito for this tutorial because it is the most used mocking framework for Java and one of the most used Java libraries in general. It has many advanced features, but it is also easy to write good tests with just the basic tools. When you encounter a problem, the chance that someone has already solved it, or something similar, is extremely high. Also, the general concept of working with proxy-based mocking should be transferable to any other mocking framework of your choice.

Mockito logo

How to use Mockito

To use Mockito, you have to include the Mockito library and a testing framework in your project. I will use JUnit 5 as my testing framework. For an easy setup I used Maven. You can find the necessary Maven packages under the following coordinates:

  • org.mockito:mockito-core:2.2.2
  • org.junit.jupiter:junit-jupiter:5.5.2

Then all you need to do is add the following imports at the beginning of your test class, and you are ready to write unit tests with mocks.

Necessary imports
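
For reference, this is roughly what the imports look like, assuming JUnit 5 and Mockito core; the exact list may differ slightly depending on which features you use:

import org.junit.jupiter.api.Test;                 // JUnit 5 test annotation
import org.mockito.InOrder;                         // only needed for ordered verification

import static org.junit.jupiter.api.Assertions.*;  // assertEquals, assertTrue, ...
import static org.mockito.Mockito.*;                // mock, when, verify, spy, ...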

Basic functionalities

In the following part I will explain some of the basic functionalities of Mockito. For better visualization I wrote a little test project where I showcase the explained functionalities. You can find the source code in the following Git repository:
Mockito_ePortfolio

It contains a simple Library and Book class. The different tests cover the functions of the Library class, and I always mock the Book class to give an example of how you can work with Mockito. When writing about a certain feature, I will refer to the test class where it is implemented.

Create a mock and set its behaviour

Let us begin with the most important part. How do I create a mock of the needed class? This part is very simple. You just call the static method mock(MyClass.class) to create a new mock object. Now you must define the behaviour of your mock object when certain functions are called on the object.

This can be realized with when(mock.doSomething()).thenReturn(xyz) and is called stubbing. With this you can create the environment that is needed for your test to work. You can find this in every test class, but you can see it in its most basic form in the class TestSumOfPages.java.

Now you are already able to get your test working without dependencies on any classes other than the one that should be tested. But we are still not verifying whether the behaviour of our class is as intended. First, you can use the utilities of JUnit in the form of the different assert functions to verify the output of the tested methods. But Mockito also provides some extra tools, so you can write even better tests.
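
To make this concrete, here is a small sketch of what such a test could look like. The Book and Library classes and their methods are only stand-ins, not the exact code from the repository:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class SumOfPagesSketch {

    @Test
    void sumOfPagesIsCalculatedFromAllBooks() {
        // create mocks instead of real Book instances
        Book firstBook = mock(Book.class);
        Book secondBook = mock(Book.class);

        // stubbing: define what the mocks return when getPages() is called
        when(firstBook.getPages()).thenReturn(100);
        when(secondBook.getPages()).thenReturn(250);

        Library library = new Library();
        library.addBook(firstBook);
        library.addBook(secondBook);

        // plain JUnit assertion to check the result of the tested method
        assertEquals(350, library.sumOfPages());
    }
}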

Verification with Mockito

You can verify whether the functions of a mock object have actually been called within your test. This can be helpful to check that the code is executed as intended and does not just somehow return the correct results. For this you can use the verify() function. You can also specify how often a function should have been called, either with an exact number or with a minimum or maximum. In the class TestAvgPerChapter.java you can find a simple example of the use of the verify() function.
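
Roughly, such a verification could look like this, again using the hypothetical Book and Library classes and the imports from above:

@Test
void pagesAreReadExactlyOncePerBook() {
    Book book = mock(Book.class);
    when(book.getPages()).thenReturn(300);

    Library library = new Library();
    library.addBook(book);
    library.sumOfPages();

    verify(book).getPages();               // short for verify(book, times(1))
    verify(book, atMost(2)).getPages();    // upper bound on the number of calls
    verify(book, never()).getTitle();      // this function must not have been called
}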

In TestAvgPerChapter.java you can also find an example of the InOrder feature. To use it, you create an InOrder object, passing in the mock objects that have to be verified as arguments. Then you can call the verify() function on the InOrder object to check whether the functions have been called in the same order as you verify them. This can be helpful in multithreading scenarios where certain functions must be called before the rest. The given example is not very practical, but it shows how to use the functions.
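
A minimal sketch of ordered verification, using Mockito's static inOrder() factory (the method names are again made up):

@Test
void booksAreProcessedInTheOrderTheyWereAdded() {
    Book firstBook = mock(Book.class);
    Book secondBook = mock(Book.class);

    Library library = new Library();
    library.addBook(firstBook);
    library.addBook(secondBook);
    library.sumOfPages();

    // the InOrder object only checks the mocks that were passed to inOrder()
    InOrder inOrder = inOrder(firstBook, secondBook);
    inOrder.verify(firstBook).getPages();
    inOrder.verify(secondBook).getPages();
}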

Alter the behaviour of the mock while testing

Sometimes you also want to test your code under different scenarios. Then you must change the behaviour of your mock multiple times. In theory you can do this by stubbing the needed function again with a different return value after the first use. In big tests this can become very tedious and bloats the code. That's why there are different options to control the behaviour of your mock for different calls.

The simplest variant is the reset() function. When you call it on a mock, all stubbing will be reset to the default values. You can find a simple example of this in the test class TestAvgChapterPerBook.java in the function testAvgChapterPerBook().
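
As a rough sketch of the effect of reset():

@Test
void resetRemovesAllStubbing() {
    Book book = mock(Book.class);
    when(book.getPages()).thenReturn(250);
    assertEquals(250, book.getPages());

    reset(book);                        // all stubbing is gone
    assertEquals(0, book.getPages());   // back to the default value for int
}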

A little more advanced is the use of consecutive stubbing. With this feature you define the return values for several calls at once. You can do this either by chaining doReturn() statements or by passing several arguments. They will then be returned in the defined order. An example can be found in the class TestListOfAuthors.java. Consecutive stubbing can be helpful when you want to test different scenarios in one test. You then have a clear separation between the test code and the definition of your mock object.
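
Both variants could look roughly like this (getAuthor() is again just a hypothetical method):

@Test
void consecutiveStubbingReturnsValuesInOrder() {
    Book book = mock(Book.class);

    // variant 1: several arguments to thenReturn
    when(book.getPages()).thenReturn(100, 200, 300);

    // variant 2: chained doReturn calls
    doReturn("Author A").doReturn("Author B").when(book).getAuthor();

    assertEquals(100, book.getPages());    // first call
    assertEquals(200, book.getPages());    // second call
    assertEquals(300, book.getPages());    // third and every following call
    assertEquals("Author A", book.getAuthor());
    assertEquals("Author B", book.getAuthor());
}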

Partial mocking

Finally, I want to give a brief introduction to the topic of partial mocking of real objects. The basic idea is that you use a real instance of the needed object for testing and only stub selected functionalities where necessary. In the past this was considered bad practice and a code smell, because the whole reason for using mocks is to avoid using the real object. But there are several use cases where it can be necessary, for example when working with legacy code or third-party interfaces.

To use this feature, you create your mock object with the spy() method, which takes a real instance of the object to mock as an argument. Then you can use your mock instance as usual. The difference is that when a function is not stubbed, the function of the real object will be called instead of returning a default value. An example of this can be found in the class TestAvgChapterPerBook.java in the function testWithSpy().
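
A small sketch of the idea; the Book constructor and its methods are again only placeholders:

@Test
void spyFallsBackToTheRealObject() {
    Book realBook = new Book("Some Title", 635);   // a real instance
    Book spyBook = spy(realBook);

    // not stubbed, so the real getPages() is called
    assertEquals(635, spyBook.getPages());

    // for spies, doReturn().when() is preferred because it does not call the real method
    doReturn("Stubbed Title").when(spyBook).getTitle();
    assertEquals("Stubbed Title", spyBook.getTitle());
}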

Thank you for reading this little tutorial about Mockito. I hope it helped you get a first impression of how to work with mocks. Any feedback is appreciated.

Best regards,

Tim Leistner

Installation of our services

Today we want to show you how you can install our systems on your own machine.

Prerequisites: Docker, an Android phone

Step 1: Set up the API and webservice

Note: we are storing all container data under /docker/*service*; you may edit this path to your liking.

The API, webservice and database will need to share a network:
docker network create web

For the API you will need a Postgres database; we are using the following docker-compose file: https://gitlab.maphynn.de/snippets/1

In pgAdmin, create the role “api” with all rights, create a new database called ‘maphynn’, and initialize it with this SQL script.

Then you have to log in to our registry:
docker login https://registry.maphynn.de

The API stores some configuration in a .env file with the following content:

PGHOST=postgres
PGUSER=api
PGDATABASE=maphynn
PGPASSWORD=PASSWD
PGPORT=5432
JWT_TOKEN_SECRET=WOW
APP_NAME=Maphynn
APP_VERSION=0.0.1
PORT=1234
BACKDOOR_HASH_KEY=YAY

The JWT_TOKEN_SECRET is a password that is used to encrypt the JWT that is responsible for verifying requests. You can use any password you like, but we recommend a minimum length of 16 characters.

The BACKDOOR_HASH_KEY is a bcrypt hash, which you can create here: just enter any password for Encrypt and leave the rounds at 12. Enter the hashed key as BACKDOOR_HASH_KEY. This password is used for secure communication between the webservices and the API.

And run the image:

docker run --restart unless-stopped -d --name maphynn_api --net=web -v /docker/.env:/usr/src/app/.env -v /docker/api/images/profile_pictures:/usr/src/app/static/images/profile_pictures -p 8083:1234 registry.maphynn.de/maphynn/api:development

The webservice also has a .env:

PORT=1234
API_ADDRESS=your address
LOCAL_KEY=WOW

For LOCAL_KEY, enter the plaintext password which you encrypted with bcrypt.

and can be started with this command:

docker run --restart unless-stopped -d --name maphynn_webservice --net=web -v /docker/.webenv:/usr/src/app/.env -p 8082:1234 registry.maphynn.de/maphynn/webservice:development

Now we can start the website:

docker run --restart unless-stopped -d --name maphynn_website -p 8081:80 registry.maphynn.de/maphynn/website:development

To run the app, go to https://app.maphynn.de/#/app/, download the latest development.apk, and install it on an Android phone. You have to allow installation from unknown sources first.

Metrics (e.g. a bunch of numbers and graphs that look cool (hopefully))

Ladies and gentlemen, today I want to introduce you to our metrics system. We are using SonarQube to analyse where our code lacks attention. SonarQube is quickly set up inside a Docker container (maphynn_sonarqube) and supports several languages out of the box, among them TypeScript, which is used by our backend. Plugins can provide metrics for other languages; we used this plugin for our frontend, which uses Flutter and Dart. The sonar-scanners run during the CI/CD pipeline on GitLab for the dev branches. The analysis of the website is currently not working, because we do not have any unit tests and the sonar-scanner expects test and coverage outputs.

Now let's take a look at how metrics improved our code. I will take the app as an example. At first there were approx. 900 code smells?! What in the world have I done? … Well, maybe it isn't entirely my fault: the plugin for Dart seems to enable EVERY dartanalyzer rule, and some of them are even contradictory -> use the final modifier for variables that are only assigned once. OK, did that, but hey, don't do it for variables in methods… but you told me to… anyway. It also doesn't like JSON strings, no idea why… Anyway, after a bit of work we are down to 400ish. And from now on I will use final and write some comments.

Decreasing number of code smells (and bugs).
The use of the modifier final was probably the biggest concern

The following is an example where I decided not to change anything:

I don't see why I wouldn't want to do that. Sure, the name of the variable implies the type, but it also doesn't hurt.

For the API there were security issues, which turned out to be the ‘hardcoded’ passwords for the API documentation, so nothing to worry about.

So this is our metrics system. Hope you find this interesting and see you guys next week!

P.S. If anybody also uses SonarQube and has successfully activated authentication using their GitLab, please tell me how. Somehow GitLab appends “/gitlab” to the end of the redirection URL and it doesn't work.

P.P.S. I think it might be better to set up metrics earlier, when you have less code. It might be a lot of setup in the beginning, but you won't be overrun by everything you did wrong over the last year.

Design Patterns

The topic of this week's blog post is design patterns.
The definition at the beginning of the Wikipedia article states the following:

In software engineering, a software design pattern is a general, reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. Rather, it is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system.

We did research on different design patterns, like the factory and the observer pattern, but we could not really find a point in our source code where we could apply one of these more common design patterns in a meaningful way.
The reason for this is that we rely heavily on the functionality that our frameworks provide. In the case of our frontend this is Flutter and AngularDart, and in the backend things like Socket.IO and Sequelize. As a result, we only write small pieces of independent code, and it was difficult to find a design pattern with which we could improve it.

The mentioned frameworks already provide us with different design patterns, which we didn't have to implement on our own. The first example of this is the observer pattern, which is provided by Socket.IO. The observer pattern is pretty simple and basically consists of two different parts: observers and subjects. An observer subscribes to the subjects from which it wants to get updates when changes happen. One observer can subscribe to several different subjects, and every subject can have many observers. In our case we only have one subject, which is our webservice on the server. Every client who uses our service and is logged in subscribes to this subject.
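
To make the roles a bit more concrete, here is a minimal, generic sketch of the observer pattern in Java. It is not our actual code (Socket.IO handles all of this for us) and the class and method names are just illustrative:

import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update(String message);
}

// The subject keeps a list of its observers and notifies them about changes.
class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void subscribe(Observer observer) {
        observers.add(observer);
    }

    void notifyObservers(String message) {
        for (Observer observer : observers) {
            observer.update(message);
        }
    }
}

// A client that simply prints every update it receives.
class Client implements Observer {
    @Override
    public void update(String message) {
        System.out.println("Received update: " + message);
    }
}

class ObserverDemo {
    public static void main(String[] args) {
        Subject webservice = new Subject();  // in our project this role is played by the webservice
        webservice.subscribe(new Client());  // every logged-in client subscribes
        webservice.subscribe(new Client());
        webservice.notifyObservers("something changed");
    }
}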

In the end we chose to implement the module pattern in our API code, in a simplified way. We found that we were using the same Sequelize call at different points in our code, so to increase maintainability we moved it into a separate function. Because of this, we only have to change the call in this one place when necessary. For now this is only a minor issue, because our project is still quite manageable at this size, but as our codebase grows, little things like these become more important.

In the following you can see the difference that this change caused in our class diagram. The difference is pretty small, because we only did a minor change to the code.

Old version on the left and new version on the right.

And here you can see the changes we did in our code:

Old code
New code

Best regards,
MAPHYNN-Team

Refactoring

Hello everybody,

this week's task was to refactor deliberately badly written code. To improve the code, we added JUnit tests and restructured it using the refactoring tools of our preferred IDEs.

These are the links to our Git repositories:

Below you will find a little note from each of us on how their IDE helped with the refactoring.

Tim and Marvin:
We used IntelliJ for the refactoring and could automate nearly every step of it, which saved time and left less room for mistakes; we didn't expect this to such an extent beforehand.

Felix:
I used Eclipse to refactor since I was already working with it. It helped a lot with the refactoring, and even though it sounds very subtle, auto-formatting helps a lot.

Clemens:
In my case I tried refactoring in Visual Studio Code, as I do a lot of projects using this IDE. VS Code offers a very wide range of refactoring functions, and if that's not enough for you, you can choose from countless plugins. For more information about refactoring in VS Code, just click here.

Greetings from your Maphynn team!

Function Points

The topic of this week's blog post is the calculation of function points for past and future use cases.
Function points are a unit of measurement for software components; they make it possible to evaluate the complexity of a chunk of software. If you want to learn more about function points and how to calculate them, we can recommend the following YouTube channel:
https://www.youtube.com/user/functionpoints

Here you can see an example of our calculations:

You can find these calculations in every use case document of our project.
Here is a link to the example above:
https://gitlab.maphynn.de/maphynn/maphynn/-/blob/master/uc_documents/UC_Register.md

All our calculations are also collected in the following Google sheet:
https://docs.google.com/spreadsheets/d/1OzQiNqI7TsXGBPpmqOgo2JN-4pYjRQ1Ihxfl0vGNjc4/edit?usp=sharing

But what do you do after collecting all this data? We used the function points to improve our time estimations for future use cases. To accomplish this we created the following graph based on the table below:

Time spent per Function Point
Function Points table

Detailed view:
https://drive.google.com/file/d/1g_NpCGTHKUJMujEwyipfvr1aCTgT0jE7/view?usp=sharing

Based on the use cases from last semester, we fitted a trendline that correlates the calculated function points with the time spent. With that we can now approximate the time needed for future use cases. As you can see, some of the blue points are very far away from the trendline, like the use cases “Login” (Log) and “Register” (Reg). We needed more time to complete these use cases because they were the first ones we worked on, when we had little to no experience with these technologies. “Add/Remove Friend” (ARF), on the other hand, took less time relative to the other use cases, because we were able to reuse things from past use cases and didn't have to figure everything out from scratch. One of these things was, for example, how to send an HTTP request in Dart and how to work with the received answer.

With the estimated time and the data from last semester, we are now able to estimate how much time we have to invest in our project this semester if we want to fulfill the goals we set.
You can see the time we spent per workflow in the picture below. In an earlier blog post we explained what these workflows are. The yellow area is the time we spent implementing our practical use cases. We marked the end of the first semester with the black arrow. There you can see that we spent around 40% of our time implementing features. Because we also did a lot of setup work, which is now done, we expect to spend 50% of our time implementing in the future.

Here is also a link to our chart in YouTrack.
https://youtrack.maphynn.de/reports/cumulativeFlow/133-2

Thanks for reading this blog post. We would really appreciate your feedback.

Based on that, we get an estimate of about 60 work hours needed for implementation, and since implementation should make up about half of our time, doubling it gives an estimate of 120 hours that we will have to invest overall. This seems to be a manageable amount, and we will see whether reality matches it.

With best regards,
the MAPHYNN team