Current deep learning libraries make it easy to add complexity to a model: more components, more layers, more optimization tricks. However, whenever you change the code or the model, you should have at least an intuition for why the change should help. Likewise, when you run an experiment, you should have a clear expectation of its outcome: what do you expect the plotted results to look like, and what will they tell you? This is even more important when your model is not doing what it is supposed to do. In that case, you are probably seeing the symptoms of a bug, and extending the model will not help you find it; it may even make the problem harder to isolate. Before making your model more complex, get to the bottom of what might be wrong with it.

Keep in mind, too, that in your report you will have to justify what you did. An assessor of your report is interested in understanding your thought process. If you cannot formulate a research hypothesis and explain to yourself why what you are doing should work, then chances are good that no one else can either.
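One concrete way to put this advice into practice is a sanity check with a stated expectation: before adding anything, verify that the current model can overfit a single batch. The sketch below is a minimal illustration of this idea, assuming a PyTorch setup; the model, data, loss threshold, and step count are all hypothetical placeholders for your own experiment.

```python
# Sanity check (sketch): a working model and training loop should be able
# to drive the loss near zero on one fixed batch. If it cannot, you are
# likely looking at a bug, and adding complexity will only obscure it.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for your real model and data.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
inputs = torch.randn(16, 10)          # one fixed batch of 16 examples
targets = torch.randint(0, 2, (16,))  # matching class labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# The expectation was formulated up front: near-zero loss on this batch.
print(f"final loss: {loss.item():.4f}")
assert loss.item() < 0.1, "cannot overfit a single batch; suspect a bug"
```

The point of the check is not the numbers themselves but that you wrote down, before running it, what the result should be and what a deviation would tell you.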