Attached is a rather large patch that introduces the Pytest test runner and the Grappa test matcher into our code base. The patch replaces all of the Python unit tests from the previous design. This is a follow-up to our earlier proof of concept and the discussion around the idiosyncrasies of our existing test suite.
We are motivated to submit this change for the following reasons:
1. A highly customized test suite steepens the learning curve for contributing to the code base. Pytest is a mature, stable Python test framework that outside developers are more likely to be familiar with.
2. Pytest supports many of the features we've built for our use cases out of the box, with extensive documentation.
3. The inability to run tests individually has become a pain point, especially when debugging long-running feature tests.
Tests can be run with the following command:
Please let us know what you think and if there are any issues!
Note: The feature tests are not yet completed. We expect our CI to fail as a result of this patch. We will complete this step soon in a follow-up patch!
Anthony && Joao && Victoria
Attachment: 0001-Use-Pytests.diff (1M)
On Wed, May 23, 2018 at 10:48 PM, Anthony Emengo <[hidden email]> wrote:
(pgadmin4) piranha:web dpage$ yarn test:unit
yarn run v1.3.2
$ yarn run linter && pytest -q pgadmin
$ yarn eslint --no-eslintrc -c .eslintrc.js --ext .js --ext .jsx .
$ /Users/dpage/git/pgadmin4/web/node_modules/.bin/eslint --no-eslintrc -c .eslintrc.js --ext .js --ext .jsx .
Traceback (most recent call last):
File "/Users/dpage/.virtualenvs/pgadmin4/lib/python2.7/site-packages/_pytest/config.py", line 371, in _importconftest
mod = conftestpath.pyimport()
File "/Users/dpage/.virtualenvs/pgadmin4/lib/python2.7/site-packages/py/_path/local.py", line 668, in pyimport
File "/Users/dpage/git/pgadmin4/web/pgadmin/__init__.py", line 29, in <module>
from pgadmin.model import db, Role, Server, ServerGroup, \
ImportError: No module named pgadmin.model
ERROR: could not load /Users/dpage/git/pgadmin4/web/pgadmin/conftest.py
error Command failed with exit code 4.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Please don't forget to update the README and top level Makefile as well! I'll probably have to tweak my Windows CI jobs, but the Linux/Mac ones all use the Makefile (which is also hard-coded into my fingers)!
Here’s a follow-up patch with the relevant README and Makefile changes. To be clear, both patches need to be applied in succession to run the tests.
The error that you were running into was because the appropriate
Attachment: 0002-Use-Pytests.diff.ci-skip (3K)
On Thu, May 24, 2018 at 5:14 PM, Anthony Emengo <[hidden email]> wrote:
That really needs to be fixed then. I shouldn't need to do anything beyond activate the appropriate virtual environment (which I had done).
As part of the development environment, we do not see the reasoning behind not adding PYTHONPATH to the environment variables, especially because this looks like the way pytest was envisioned to be used.
However, please try the following patch instead. We've changed the pytest invocation to include the relevant directory in the set of directories to load, and updated the docs and Makefile accordingly.
Joao && Anthony
On Thu, May 24, 2018 at 12:24 PM Dave Page <[hidden email]> wrote:
Attachment: 0002-Use-Pytests-v2.git-diff (4K)
On Thu, May 24, 2018 at 8:36 PM, Joao De Almeida Pereira <[hidden email]> wrote:
Really? It's one more step that wasn't previously required, and for which there is no good reason when running in a properly configured virtual environment. Not only that, but PYTHONPATH is typically used as a search path to find modules on which the application/script is dependent - not to find the application/script itself. Unconditionally overriding it is likely to lead to problems in some possible configurations (at the very least I would expect to see the Makefile do PYTHONPATH=$PYTHONPATH:$(PWD)/web).
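The append-don't-override approach suggested above can be sketched in Python. This is only an illustration of the principle: the helper name and directory paths here are invented, not taken from the patch or the Makefile.

```python
import os

# Sketch of appending to PYTHONPATH rather than overriding it, per the
# Makefile suggestion above (PYTHONPATH=$PYTHONPATH:$(PWD)/web).
# The helper name and paths are illustrative only.
def extended_pythonpath(extra_dir, environ):
    """Return a PYTHONPATH value with extra_dir appended, preserving
    any entries that were already present."""
    existing = environ.get("PYTHONPATH", "")
    if not existing:
        return extra_dir
    return existing + os.pathsep + extra_dir

# Existing entries survive; the new directory is appended, not substituted.
print(extended_pythonpath("/home/user/pgadmin4/web",
                          {"PYTHONPATH": "/opt/lib"}))
# /opt/lib:/home/user/pgadmin4/web (with the POSIX path separator)
```

Unconditionally assigning `PYTHONPATH=$(PWD)/web` instead would clobber whatever a developer had already configured, which is the failure mode described above.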
Some initial feedback:
- The JSON results are no longer being generated.
- The output is *extremely* verbose, showing a lot of seemingly unnecessary info (do we really need to output the source code of failing test cases?). I would suggest that the verbose output be directed to the log, and the visible output should be much more terse along the lines of what it is now.
- There is no summary at the end showing what passed, failed or was skipped. To figure out what to look at, I have to trawl through over 13K lines of output (642KB).
- 69 tests failed for me running test:unit. They were previously all passing.
- It is a *lot* faster - not sure if that's a result of tests failing, but I expect not entirely.
- /README was updated, but not /web/regression/README
Good point; we didn't consider the possibility of someone developing multiple Python apps on the same machine.
We will revisit this and start using
python -m pytest
as that should solve the problem.
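A minimal sketch of why module-style invocation solves the ImportError: running Python with `-m` (or `-c`) puts the current working directory on `sys.path`, so packages living in the cwd (such as `pgadmin` under `web/`) become importable without any PYTHONPATH setup, whereas invoking the pytest console script directly does not. The package name below is made up for illustration.

```python
import os
import subprocess
import sys
import tempfile

# Sketch (demo_pkg is an invented name, not from the pgAdmin tree):
# a package in the current working directory is importable when Python
# itself is the entry point, because the cwd is placed on sys.path.
with tempfile.TemporaryDirectory() as tmp:
    pkg = os.path.join(tmp, "demo_pkg")
    os.mkdir(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("VALUE = 42\n")
    # With tmp as the cwd, the interpreter resolves demo_pkg via sys.path.
    out = subprocess.check_output(
        [sys.executable, "-c", "import demo_pkg; print(demo_pkg.VALUE)"],
        cwd=tmp, text=True,
    ).strip()

print(out)  # 42
```

This is the same mechanism that makes `python -m pytest` from the `web/` directory resolve `import pgadmin.model` where the bare `pytest` binary failed.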
That is a flag that we didn't add to the script, but we will revise the patch to include it.
We will see what we can do about this point.
If we use the -q flag, the output is smaller. We can add that too.
Can you provide some log of the failing tests?
Our lapse; will do that.
0001 - Convert all the unit tests to pytest/grappa (patch previously sent, with some tests that were not running previously)
From now on, we can launch one test at a time using pytest (this gist https://gist.github.com/kwmiebach/3fd49612ef7a52b5ce3a has some examples of how to use it). Because we are using the pytest CLI, all of its options are available to us, and the community also has a ton of plugins if we ever need something else.
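For reference, single-test selection uses pytest's node-id syntax (`path/to/file.py::test_name`). A self-contained sketch, assuming pytest is installed; the test file and test names are invented for illustration, not taken from the pgAdmin suite.

```python
import os
import subprocess
import sys
import tempfile

# Illustrative test file (names made up, not from the pgAdmin suite).
TEST_SRC = """\
def test_addition():
    assert 1 + 1 == 2

def test_subtraction():
    assert 2 - 1 == 1
"""

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test_demo.py")
    with open(path, "w") as f:
        f.write(TEST_SRC)
    # Select exactly one test from the file via its node id;
    # -q keeps the output terse, as discussed above.
    result = subprocess.run(
        [sys.executable, "-m", "pytest", "-q", path + "::test_addition"],
        capture_output=True, text=True,
    )

print(result.returncode)  # 0 when the selected test passes
```

The same `::` syntax also addresses a single test class, or a single method within one, which is what makes debugging one long-running feature test practical.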
We updated all the READMEs with the following information as well.
As we discussed previously, in order to use the pytest binary directly you need to point the
We also changed the feature tests; they are now under
The boxes we are pointing at with the teal arrow each represent one Python version against one database. This can be accomplished using Jenkins as well. To do that, we are going to generate a Jenkinsfile (https://jenkins.io/doc/book/pipeline/jenkinsfile/) so the pipeline is described in a DSL, making it easier for us to deploy it in a more consistent way. The current Jenkins LTS version is 2.107.3, so the version we have in CI should be OK, but it is always good to be on the LTS version of Jenkins or newer because the community is pretty active.
On Fri, Jun 1, 2018 at 4:52 PM, Joao De Almeida Pereira <[hidden email]> wrote:
Any guesses as to the ETA? Given that most of our dev, and our Windows and Mac packages both run on 2.7 at the moment, it's clear that this is a required fix before we can proceed.
My guess is that you ran them without having an MTA running on the test machine.
Well the current public Jenkins system is going away soon, and the new one has had a ton of jobs created that assume the tests run in series automatically against each database server (and is completely different from the current system). I do plan to change that in the longer term, but it requires a change to the way I've been using (and know how to use) Jenkins.
However, the bigger problem for me is that I often run the tests against multiple DB servers on my laptop, without Jenkins. How would that workflow look now?
Attached you can find the patch that bumps grappa to version 0.1.9, which supports Python 2.7.
Do you have a link to the new CI that you can share with us so we can take a peek? We can give some advice on what we believe is a good flow for the Continuous Delivery pipeline.
We have some examples from our pipeline that we can share:
Script that we use to run the unit + feature tests on a Docker image that has Python + Yarn + Selenium + Postgres installed:
This type of script can be added to the Jenkinsfile to create a pipeline step. A good practice for a reproducible pipeline is to use Docker to ensure that every test run happens in a clean and predictable environment, which makes it easy to reproduce a problem found in testing.
We understand; that looks like an interesting use case, and it could be accomplished by creating a script or something similar that launches the tests multiple times. That is something we can schedule as a future step after this patch is reviewed and committed.
Victoria & Joao
Attachment: 0004-Upgrade-grappa-to-support-python-2.7.patch (562 bytes)
On Mon, Jun 4, 2018 at 3:26 PM, Joao De Almeida Pereira <[hidden email]> wrote:
I'll take a look.
No, because it's firewalled to the nines inside our network. There's no chance I'm making production build machines internet-accessible.
Docker is of little use to us here, as 2 of the 4 build platforms cannot be run in Docker (macOS and the Docker container itself), and a 3rd (Windows) would be extremely difficult to run that way.
We don't commit patches that remove functionality that people use regularly, even if it's only intended to be temporary - that's simply not how we do things in the community, as the risk of the second part not being done is too high.
In reply to this post by Joao De Almeida Pereira
On Mon, Jun 4, 2018 at 3:26 PM, Joao De Almeida Pereira <[hidden email]> wrote:
I thought I had replied to this, but apparently not. My apologies...
This does seem to fix the issues with Python 2.7. The remaining issues I saw are:
- There are a handful of failures with PG 11, however that's not yet supported and Khushboo is working on it. It shouldn't hold up this patch - just mentioning for completeness.
- I get the following failure (which is easily fixed in the config file, though that of course, shouldn't be necessary):
E AssertionError: Oops! Something went wrong!
E The following assertion was not satisfied
E subject "/Library/PostgreSQL/ ..." should be equal to "/Library/PostgreSQL/ ..."
E What we expected
E a value that is equal to "/Library/PostgreSQL/ ..."
E What we got instead
E an value of type "str" with data "/Library/PostgreSQL/ ..."
E Difference comparison
E > - /Library/PostgreSQL/tablespaces/9.4
E > + /Library/PostgreSQL/tablespaces/9.4/
E > ? +
E File "/Users/dpage/git/pgadmin4/web/pgadmin/browser/server_groups/servers/tablespaces/tests/test_tbspc_get.py", line 75, in test_tablespace_get
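The mismatch above comes down to a single trailing slash. A hedged sketch of one way to make the comparison robust on the test side (regardless of the config-file fix mentioned above); the full paths are reconstructed here from the diff lines in the output:

```python
import os.path

# The two values from the "Difference comparison" above: identical paths
# except for a trailing slash.
expected = "/Library/PostgreSQL/tablespaces/9.4"
actual = "/Library/PostgreSQL/tablespaces/9.4/"

# Direct string comparison fails on the trailing slash alone...
print(actual == expected)  # False
# ...while normpath strips it, comparing the paths semantically.
print(os.path.normpath(actual) == os.path.normpath(expected))  # True
```

Normalizing both sides before asserting equality would keep the test independent of whether the configured tablespace path happens to end in a slash.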
Attached are four different patches that should be applied in order. Included is a fix for the failing test above.
Victoria && Joao
On Mon, Jun 11, 2018 at 11:01 AM Dave Page <[hidden email]> wrote:
In reply to this post by Dave Page-7
For an Open Source project, if the community cannot see where the tests are running, it loses a huge part of the process. We believe that removing this capability will have a negative impact on development, especially because we do not have a CI/CD system otherwise; until the code is merged, we will never know whether master is broken or not.
The Docker files we are talking about here are for running the tests, and we believe the tests are all running in a Linux environment.
Victoria && Joao
Attachment: Screen Shot 2018-06-01 at 3.36.37 PM.png (366K)
On Tue, Jun 12, 2018 at 8:39 PM, Victoria Henry <[hidden email]> wrote:
This is one of multiple CI/CD systems; specifically it's the one that will be producing our official builds. Security always trumps developer convenience for build machines as far as I'm concerned - have you ever had to deal with a possibly-compromised build server and the PR fallout etc that can result from that?
The existing public system will be replaced with one that mirrors a portion of the build system; specifically, the test jobs, but not the production build jobs or dependencies. It may be removed completely in the future if I can find a way to securely allow access to build info from the build system.
No, I'm running tests on all platforms. We've proven numerous times that just running them on Linux is not enough.
In reply to this post by Victoria Henry
On Tue, Jun 12, 2018 at 3:41 PM, Victoria Henry <[hidden email]> wrote:
This is no longer failing :-)
However, I'm still seeing problems that need to be resolved:
- Passwords are still being written to the log file. This is a complete show-stopper.
- The regression.log file is empty. I'm now only getting the JSON summary. We really need both - the regression log for diagnostics, and the JSON file for feeding into monitoring systems.
- There is still no summary list of tests that failed or were skipped output at the end of the run. This is important because it highlights which tests failed (obviously!) and need investigation, and knowing which ones are skipped at a glance is very useful as they can be skipped due to mis-configuration; e.g. running against a different server than intended, or failing to specify something like a tablespace path which we can then easily see to fix.
- I cannot run against servers other than the first one in the config file. We need to run against all that are enabled, and have the output show which one was used for each one as we have now. This may seem trivial, but it's really not - to the point that for some of us it negates the whole point of the patch by giving us more control over which tests are run but removing all control over which database servers they will run against. In my case, this will almost certainly cost even more time; right now I just have to run more tests than I may need but I can do other things at the same time. With this patch I'll have to keep re-running the tests, modifying the config between each run.
The supported versions:
- Pytest: Python 2.7, Python 3.4+, PyPy 2.3, Jython 2.5 (untested)
- Grappa: Python 2.7+, 3+
So, how are we going to handle unsupported versions (PY 2.6 and 3.3) ?
On Wed, Jun 13, 2018 at 6:07 PM, Dave Page <[hidden email]> wrote: