Before this patch, we were reading in old coverage data every
time we ran test-main and had a .coverage file lying around.
This would cause inaccurate data when you changed code, and it
would cause crashes if you moved your working directory on the
file system.
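One way to avoid reading stale data is to erase any existing coverage
data before starting a fresh run. This is only an illustrative sketch
driving the coverage.py API directly, not necessarily how test-main
does it:

    import coverage

    cov = coverage.Coverage()
    cov.erase()   # discard any stale .coverage file from a previous run
    cov.start()
    # ... run the test suite here ...
    cov.stop()
    cov.save()
    cov.report()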
By default, unittest includes every module-level class that inherits
from TestCase and defines at least one method whose name starts with
'test'. Since it doesn't provide a convenient way to exclude
TestSuites, we need to manually filter out our test base class so it
isn't run as a test itself.
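One way to do that manual filtering looks roughly like this; the base
class name BotTestCase is an assumption, and this is a sketch rather
than the exact code in the test runner:

    import unittest

    def filter_out_base_class(suite, base_name='BotTestCase'):
        # Rebuild the suite, dropping tests that belong to the shared
        # base class so unittest doesn't run it directly.
        filtered = unittest.TestSuite()
        for test in suite:
            if isinstance(test, unittest.TestSuite):
                filtered.addTest(filter_out_base_class(test, base_name))
            elif type(test).__name__ != base_name:
                filtered.addTest(test)
        return filtered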
This removes the excessively verbose lists of files
to be tested, and flushes the output after every print
to update the user on the current status in real time.
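The flushing boils down to something like the following; the bot list
and message text here are illustrative, not the runner's actual output:

    bots_to_test = ['helloworld', 'followup']  # hypothetical list of bots
    for bot_name in bots_to_test:
        # flush=True pushes each status line out immediately instead of
        # waiting for the output buffer to fill up
        print('Running tests for {}...'.format(bot_name), flush=True)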
This commit adds a script to automate the PyPI release of the
zulip, zulip_bots and zulip_botserver packages.
The tools/release-packages script takes care of uploading the
packages to PyPI and pushes commits updating the package versions
to both repos. If you have commit access to the repos, you
can --push upstream to master. If not, you can --push origin
to a new branch on your fork and create a PR for those changes.
Ideally, a release shouldn't take longer than the time it takes to
type the release command. If you have SSH set up for GitHub, you
won't need to type in your GitHub username and password. You can
also store your PyPI credentials in a file in your home directory;
it isn't very secure, but it saves time nevertheless.
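For illustration, the two push workflows described above would look
roughly like this (the exact argument spelling is an assumption):

    ./tools/release-packages --push upstream   # with commit access: push to master
    ./tools/release-packages --push origin     # otherwise: push to your fork, open a PR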
We now have a custom command in zulip_bots/setup.py to generate
a MANIFEST.in. To generate a MANIFEST for a PyPI release, we
can now run:
python setup.py gen_manifest --release
To generate a non-release MANIFEST, we can run:
python setup.py gen_manifest
This allows us to automate the MANIFEST generation in our
release automation script.
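For reference, a custom setuptools command along these lines can be
wired up roughly as follows; the class name, option handling and the
skeletal run() body are assumptions, not the actual zulip_bots/setup.py
code:

    from setuptools import Command, setup

    class GenManifestCommand(Command):
        description = 'generate MANIFEST.in'
        user_options = [('release', None, 'generate a release MANIFEST')]

        def initialize_options(self):
            # --release is off unless passed on the command line
            self.release = False

        def finalize_options(self):
            pass

        def run(self):
            # The real command writes different include rules for release
            # vs. repo builds; here we only record which mode was requested.
            mode = 'release' if self.release else 'repo'
            with open('MANIFEST.in', 'w') as f:
                f.write('# generated MANIFEST.in (%s mode)\n' % mode)

    setup(
        # ... regular package metadata ...
        cmdclass={'gen_manifest': GenManifestCommand},
    )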
This bot depends on PyDictionary, which isn't very well-implemented
or well-maintained. PyDictionary's dependency on goslate and
goslate's dependency on concurrent.futures have been known to cause
problems in Python 3 virtualenvs. This bot has also been the
source of disruptive BeautifulSoup warnings. Since this bot is only
meant to be an example bot, and for all the above reasons,
it makes sense to remove this bot. The cost of debugging the above
issues outweighs the benefit of keeping the bot at all.
As a package maintainer, I have to exclude the test fixtures in
MANIFEST.in so that they aren't shipped with the package release.
But for the repo, we need to include fixtures, logos and docs so
that Travis can run the tests after running
`pip install ./zulip_bots`.
Also, since we are installing zulip_bots off of this repo in our
main repo, docs and logos need to be included in MANIFEST.in as
well, so that they can be rendered alongside our
webhooks/integrations documentation.
To automate this process, I just wrote this handy little script
that future bot contributors can run instead of having to manually
specify what to include in MANIFEST.in in the repo.
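To make the repo-versus-release distinction concrete, the repo-side
MANIFEST.in ends up with entries roughly like these (the patterns are
illustrative, not copied from the generated file); a release build
would drop the fixtures line:

    include zulip_bots/bots/*/fixtures/*.json
    include zulip_bots/bots/*/logo.*
    include zulip_bots/bots/*/doc.md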
doc.md better describes the style of documentation that will live
inside these files, since we want them to be similar to our
webhooks' doc.md files in terms of how they are rendered and how
they are composed from Markdown macros.
Instead of discovering unit tests with loader.discover() by passing
it start and top-level directories, we now discover unit tests by
loading them from specific test module objects. This makes it
easier to include or exclude specific bots from testing.
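As a sketch of the new approach (module names here are placeholders;
the real list lives in the test runner):

    import importlib
    import unittest

    def suite_from_modules(module_names):
        # Load tests from an explicit list of modules; bots we want to
        # exclude simply never appear in module_names.
        loader = unittest.TestLoader()
        suite = unittest.TestSuite()
        for name in module_names:
            module = importlib.import_module(name)
            suite.addTests(loader.loadTestsFromModule(module))
        return suite

    suite = suite_from_modules(['zulip_bots.bots.helloworld.test_helloworld'])
    unittest.TextTestRunner().run(suite)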
In order to keep all three packages (zulip, zulip_bots,
zulip_botserver) in the same repo, all package files must now
be nested one level deeper.
For instance, python-zulip-api/zulip_bots/zulip_bots/bots/, instead
of python-zulip-api/zulip_bots/bots/.
tools/server_lib contains files copied as-is, or with minor
modifications, from the zulip/zulip repository when the API
code was split into this separate repository.