So we got a huge project to work on: a Magento 2 migration. The existing store was highly customised. The team consisted of 6 people: 3 developers, 1 QA engineer, a theme designer, and a “database guy”. Our job was to move the customer to M2 with all their existing functionality and data. Since the store was really old, there was no clear requirements document.
We began with git as our primary tool. We set up three servers: a dev environment, a staging environment, and a customer environment. We agreed on this cycle for any feature:
- develop the feature
- push it to the dev environment
- the developer tests it on the dev environment
- push it to staging for the QA team to test and sign off
- push it to the customer environment, run sanity tests, and let the customer try out the feature
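The cycle above can be sketched as one long-lived git branch per server. This is a minimal sketch in a throwaway repository; the branch names (`dev`, `staging`, `production`) and the feature branch are assumptions, since the article only names the servers, not the branches:

```shell
# Sketch of the branch-per-environment flow (branch names assumed).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "Magento 2 store" > README.md
git add README.md
git commit -qm "initial commit"

# one long-lived branch per server
git branch dev
git branch staging
git branch production        # the customer environment

# a feature starts from dev ...
git checkout -q -b feature/entity-grid dev
echo "grid" > grid.txt
git add grid.txt
git commit -qm "add entity grid"

# ... merges back to dev for the developer's own testing,
git checkout -q dev
git merge -q --no-ff feature/entity-grid -m "merge entity grid"

# is promoted to staging for QA sign-off,
git checkout -q staging
git merge -q dev

# and finally to production after sanity tests.
git checkout -q production
git merge -q staging
```

The promotion merges are plain fast-forwards here; in practice each merge was followed by a deploy to the matching server.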
It felt perfect and minimalistic: just git and no other tools. Automation tests and other “DevOps” stuff felt like overkill. We had a great QA guy, so we were relaxed.
The next few weeks went like this.
I develop a feature. QA finds a bug and raises it in JIRA. I fix it and QA closes the issue after a re-test. A week later, our managers ask us for a demo and find the exact same issue. They yell at QA, tell him what a shitty job he is doing, and everybody is sad. Later I find out that I reintroduced the bug while building a new feature. The poor QA guy has to bear the blame. In this case I felt especially bad for the QA guy because he is a good friend :D.
Now imagine this: 20 different modules and no robust plan for dependency management. Every time you push a change, there is a good chance you have broken something. To give you a realistic example, a fellow developer working on a new entity grid introduced an override in di.xml. QA tested his grid feature and it worked perfectly fine. Only later did we discover that the same change broke the sales order grid page.
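To make that concrete, here is a sketch of the kind of di.xml override that bites you. I don't remember the exact class involved, so the `Vendor\Module` type is invented, but a `preference` on the shared grid `CollectionFactory` is a classic example: every admin UI grid goes through that factory, so replacing it for your own grid silently changes the sales order grid too.

```xml
<!-- Hypothetical di.xml (Vendor\Module class is invented for illustration).
     A <preference> swaps the implementation globally, not just for the
     grid the developer was testing. -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework:ObjectManager/etc/config.xsd">
    <preference for="Magento\Framework\View\Element\UiComponent\DataProvider\CollectionFactory"
                type="Vendor\Module\Ui\DataProvider\CollectionFactory"/>
</config>
```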
After we explained this to the managers, they decided that QA and developers must test the whole system before merging or pushing any code. You can imagine what this does to delivery time. And humans tend to make more mistakes when they are bored, and regression testing is BORING: going through checkout, placing orders in different scenarios, testing the eBay integration, the Amazon integration, layered navigation, and a hundred other things. That's painful. So our manual testing only made us more oblivious to bugs. I wondered how the Magento team managed this mess.
I’ve contributed to Magento open source, so I know they have a very sophisticated environment for merging pull requests and testing the whole thing on every small change. I never understood its necessity until I started working on this huge project. Now the question was: do we invest time in setting up such an environment, and if yes, how do I convince my managers to get on board?
What are the challenges in setting up an automation test environment?
- Convince your managers and peers to get on board
- Train fellow developers and QA to use automated testing
- Set up an environment that triggers those tests on every commit
Well, I told the managers that regression testing was going to postpone the release date. We created a small automated test to show them what it could do: an acceptance test written using Codeception. Luckily for us, around the same time Magento released their own framework, the Magento Functional Testing Framework, also known as MFTF. This framework is targeted at QA engineers and makes writing tests painless. We finally got approval to invest some time in this. At the time, MFTF’s documentation wasn’t clear, so we learned the damn thing by reading its source code. It was worth the effort.
We wrote around 30 large tests that do end-to-end testing from the user's perspective. What do these tests look like?
Well, when you start a test, a separate instance of Chrome fires up and the mouse starts clicking around. Adding a product, clicking “proceed to checkout”, and the actual checkout all happen automatically. Then the script opens the backend to verify that the order was placed and the order information is displayed correctly. These verifications are known as “assertions”, and one test can have a lot of them. In a single test you could log in as a customer, add a product to the wishlist, run through the whole checkout, and then check the backend. This is absolutely amazing.
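A test like the one described might look roughly like this in MFTF's XML format. The test name, URLs, and selectors below are made up for illustration; `amOnPage`, `click`, `fillField`, and `see` are standard MFTF actions, and every action needs a unique `stepKey`:

```xml
<!-- Sketch of an MFTF test (names and selectors are illustrative). -->
<tests xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="urn:magento:mftf:Test/etc/testSchema.xsd">
    <test name="StorefrontGuestCheckoutTest">
        <amOnPage url="simple-product.html" stepKey="openProductPage"/>
        <click selector="#product-addtocart-button" stepKey="addToCart"/>
        <amOnPage url="checkout" stepKey="goToCheckout"/>
        <fillField selector="input[name=firstname]" userInput="John" stepKey="fillFirstName"/>
        <!-- ... remaining shipping and payment steps elided ... -->
        <click selector="#place-order" stepKey="placeOrder"/>
        <!-- an assertion: the success page must show this text -->
        <see selector=".checkout-success" userInput="Thank you for your purchase!"
             stepKey="assertOrderPlaced"/>
    </test>
</tests>
```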
As of this writing we run our tests manually each day. I know we haven’t reached the ideal we set, but this is good enough: our tests fail regularly, and we fix things before they reach the customer. This is especially useful while doing integrations. Take a look at the following video and see if MFTF interests you.
Notice that at the very top Chrome says “Chrome is being controlled by automated software”. In the terminal I start the test; after the first test goes green I kill the process to skip the other tests. Enjoy the video and let me know your thoughts.