Update the developer guide

This commit fixes all non-functional links and updates the content to a more up-to-date state.

It also removes fedmsg from the guide.

Signed-off-by: Michal Konečný <mkonecny@redhat.com>
Michal Konečný 2022-04-14 17:29:46 +02:00
parent 50d59f41df
commit 6bd7ef5065
13 changed files with 164 additions and 220 deletions

@ -7,7 +7,6 @@
** xref:db.adoc[Databases]
** xref:writing-tests.adoc[Tests]
** xref:auth.adoc[Authentication]
** xref:fedmsg.adoc[fedmsg]
** xref:messaging.adoc[Messaging]
** xref:sops.adoc[Developing Standard Operating Procedures]
** xref:source_control.adoc[Source Control]

@ -1,14 +1,11 @@
== Authentication
Fedora applications that require authentication should support, at a
minimum, authentication against https://ipsilon-project.org/[Ipsilon].
Ipsilon is an Identity Provider that uses a separate Identity Management
system to perform authentication. In Fedora, Ipsilon is currently backed
by the https://admin.fedoraproject.org/accounts/[Fedora Account System].
In the future, it will be backed by http://www.freeipa.org/[FreeIPA].
Fedora applications that require authentication should support
https://accounts.fedoraproject.org/[Fedora Account System] backed
by http://www.freeipa.org/[FreeIPA] as an identity provider.
Ipsilon supports
https://openid.net/specs/openid-authentication-2_0.html[OpenID 2.0],
https://accounts.fedoraproject.org/[Fedora Account System] supports
https://github.com/fedora-infra/fasjson/[fasjson] as a read-only API,
https://openid.net/connect/[OpenID Connect],
https://tools.ietf.org/html/rfc6749[OAuth 2.0], and more.
@ -17,7 +14,6 @@ https://tools.ietf.org/html/rfc6749[OAuth 2.0], and more.
All new applications should use OpenID Connect for user authentication.
[NOTE]
.Note
====
Many existing applications use OpenID 2.0 and should eventually migrate
to OpenID Connect.
@ -37,13 +33,11 @@ https://fedoraproject.org/wiki/Infrastructure/Authentication[Authentication
Wiki page].
[WARNING]
.Warning
====
OpenID Connect
https://openid.net/specs/openid-connect-core-1_0.html#AuthRequest[requires
that the "openid" scope is requested]. Failing to do so will result in
undefined behavior. In the case of Ipsilon, you won't have access to the
UserInfo or recieve an ID token.
undefined behavior.
====
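As an illustrative sketch, a client registration with https://authlib.org/[Authlib]'s
Flask integration might request the scope like this (the client ID, client secret,
and provider metadata URL below are placeholders):

[source,python]
----
# Illustrative only: Authlib is one possible OIDC client library, not a
# requirement of this guide. Credentials and the metadata URL are placeholders.
from flask import Flask
from authlib.integrations.flask_client import OAuth

app = Flask(__name__)
app.secret_key = "change-me"

oauth = OAuth(app)
oauth.register(
    name="fedora",
    client_id="my-client-id",
    client_secret="my-client-secret",
    server_metadata_url="https://id.example.org/.well-known/openid-configuration",
    # Always request the "openid" scope; omitting it results in undefined behavior.
    client_kwargs={"scope": "openid profile email"},
)
----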
=== Libraries

@ -15,7 +15,6 @@ documented and continuous integration tests should ensure code is
correctly styled before merging pull requests.
[NOTE]
.Note
====
There are a few PEP8 rules which will vary from project to project. For
example, the maximum line length might vary. The test suite should

@ -4,7 +4,7 @@ We use PostgreSQL throughout Fedora Infrastructure.
=== Bi-directional Replication
http://bdr-project.org/docs/stable/index.html[Bi-directional
https://www.enterprisedb.com/docs/bdr/latest/[Bi-directional
replication] (BDR) is a project that adds asynchronous multi-master
logical replication to PostgreSQL. Fedora has a PostgreSQL deployment
with BDR enabled. In Fedora, only one master is written to at any time.
@ -24,8 +24,8 @@ All tables need to have primary keys.
BDR does not use any consensus algorithm or locking between nodes so
writing to multiple masters can result in
http://bdr-project.org/docs/stable/conflicts.html[conflicts]. There are
several types of conflicts that can occur, and applications should
https://www.enterprisedb.com/docs/bdr/latest/conflicts/[conflicts].
There are several types of conflicts that can occur, and applications should
carefully consider each one and be prepared to handle them. Some
conflicts are handled automatically, while others can result in a
deadlock that requires manual intervention.
@ -33,7 +33,7 @@ deadlock that requires manual intervention.
==== Global DDL Lock
BDR uses a
link:bdr-project.org/docs/stable/ddl-replication-advice.html[global DDL
https://www.enterprisedb.com/docs/bdr/latest/ddl/[global DDL
lock] (across all PostgreSQL nodes) for DDL changes, which applications
must explicitly acquire prior to emitting DDL statements.
@ -103,7 +103,7 @@ nodes may perform any DDL or make any changes to rows.
==== DDL Restrictions
BDR has a set of
http://bdr-project.org/docs/stable/ddl-replication-statements.html#DDL-REPLICATION-PROHIBITED-COMMANDS[DDL
https://www.enterprisedb.com/docs/bdr/latest/ddl/#bdr-ddl-command-handling-matrix[DDL
Restrictions]. Some of the restrictions are easily worked around by
performing the task in several steps, while others are simply not
available.

@ -8,26 +8,27 @@ development environments.
=== Ansible
link:#ansible[Ansible] is used throughout Fedora Infrastructure to
https://www.ansible.com/[Ansible] is used throughout Fedora Infrastructure to
automate tasks. If the project requires anything more than a Python
virtual environment to be set up, you should use Ansible to automate the
setup.
=== Vagrant
link:#vagrant[Vagrant] is a tool to provision virtual machines. It
https://vagrantup.com/[Vagrant] is a tool to provision virtual machines. It
allows you to define a base image (called a "box"), virtual machine
resources, network configuration, directories to share between the host
and guest machine, and much more. It can be configured to use
link:[libvirt] to provision the virtual machines.
https://libvirt.org/[libvirt] to provision the virtual machines.
You can install link:#vagrant[Vagrant] on a Fedora host with:
You can install https://vagrantup.com/[Vagrant] on a Fedora host with:
....
$ sudo dnf install libvirt vagrant vagrant-libvirt vagrant-sshfs
....
You can combine your link:[Ansible playbook] with link:#vagrant[Vagrant]
You can combine your https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks[Ansible playbook]
with https://vagrantup.com/[Vagrant]
very easily. Simply point Vagrant to your Ansible playbook and it will
run it. Users who would prefer to provision their virtual machines in
some other way are free to do so and only need to run the Ansible
@ -38,11 +39,14 @@ playbook on their host.
How a project lays out its development-related content is up to the
individual project, but a good approach is to create a `devel`
directory. Within that directory you can create an `ansible` directory
and use the layout suggested in the link:[Ansible roles] documentation.
and use the layout suggested in the
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html[Ansible roles]
documentation.
====
Below is a Vagrantfile that provisions a Fedora 25 virtual machine,
updates it, and runs an Ansible playbook on it. You can place it in the
Below is a Vagrantfile that provisions a Fedora 34 virtual machine, updates it,
mounts the current folder as `/home/vagrant/devel`, and runs an Ansible playbook
from `devel/ansible` on it. You can place it in the
root of your repository as `Vagrantfile.example` and instruct users to
copy it to `Vagrantfile` and customize as they wish.
@ -50,21 +54,58 @@ copy it to `Vagrantfile` and customize as they wish.
----
# -*- mode: ruby -*-
# vi: set ft=ruby :
#
# Copy this file to ``Vagrantfile`` and customize it as you see fit.
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# If you'd prefer to pull your boxes from Hashicorp's repository, you can
# replace the config.vm.box and config.vm.box_url declarations with the line below.
#
# config.vm.box = "fedora/25-cloud-base"
config.vm.box = "f25-cloud-libvirt"
config.vm.box_url = "https://download.fedoraproject.org/pub/fedora/linux/releases"\
"/25/CloudImages/x86_64/images/Fedora-Cloud-Base-Vagrant-25-1"\
".3.x86_64.vagrant-libvirt.box"
config.vm.box = "fedora/34-cloud-base"
# Forward traffic on the host to the development server on the guest.
# You can change the host port that is fo
# Forward traffic on the host to the development server on the guest
# RabbitMQ
config.vm.network "forwarded_port", guest: 15672, host: 15672
# Vagrant can share the source directory using rsync, NFS, or SSHFS (with the vagrant-sshfs
# plugin). By default it rsyncs the current working directory to /vagrant.
#
# If you would prefer to use NFS to share the directory uncomment this and configure NFS
# config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_version: 4, nfs_udp: false
config.vm.synced_folder ".", "/home/vagrant/devel", type: "sshfs"
# To cache update packages (which is helpful if frequently doing `vagrant destroy && vagrant up`)
# you can create a local directory and share it to the guest's DNF cache. The directory needs to
# exist, so create it before you uncomment the line below.
#
# config.vm.synced_folder ".dnf-cache", "/var/cache/dnf", type: "sshfs", sshfs_opts_append: "-o nonempty"
# Comment this line if you would like to disable the automatic update during provisioning
config.vm.provision "shell", inline: "sudo dnf upgrade -y"
# bootstrap and run with ansible
config.vm.provision "ansible" do |ansible|
ansible.playbook = "devel/ansible/vagrant-playbook.yml"
ansible.raw_arguments = ["-e", "ansible_python_interpreter=/usr/bin/python3"]
end
# Create the "hotness" box
config.vm.define "hotness" do |hotness|
hotness.vm.host_name = "hotness-dev.example.com"
hotness.vm.provider :libvirt do |domain|
# Season to taste
domain.cpus = 4
domain.graphics_type = "spice"
domain.memory = 2048
domain.video_type = "qxl"
# Uncomment the following line if you would like to enable libvirt's unsafe cache
# mode. It is called unsafe for a reason, as it causes the virtual host to ignore all
# fsync() calls from the guest. Only do this if you are comfortable with the possibility of
# your development guest becoming corrupted (in which case you should only need to do a
# vagrant destroy and vagrant up to get a new one).
#
# domain.volume_cache = "unsafe"
end
end
end
----

@ -19,7 +19,6 @@ http://www.sphinx-doc.org/[Sphinx] includes several extensions to turn
Python documentation into HTML pages.
[NOTE]
.Note
====
Improving documentation is a great way to get involved in a project.
When adding new documentation or cleaning up existing documentation,
@ -31,8 +30,8 @@ please follow the guidelines below.
Sphinx supports three different documentation styles. By default, Sphinx
expects ReStructuredText. However, it has included an extension to
support the
http://www.sphinx-doc.org/en/1.5.2/ext/example_google.html[Google style]
and the http://www.sphinx-doc.org/en/1.5.2/ext/example_numpy.html[NumPy
https://www.sphinx-doc.org/en/master/usage/extensions/example_google.html#example-google[Google style]
and the https://www.sphinx-doc.org/en/master/usage/extensions/example_numpy.html#example-numpy[NumPy
style] since version 1.3. The style of the documentation blocks is left
up to the individual project, but it should document the choice and be
consistent.
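For example, a hypothetical function documented in the Google style (as rendered
by Sphinx's napoleon extension) might look like this:

[source,python]
----
def tag_package(name, tag):
    """Apply a tag to a package (hypothetical example).

    Args:
        name (str): The name of the package to tag.
        tag (str): The tag to apply.

    Returns:
        bool: True if the tag was applied, False if it was already present.
    """
    return True
----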
@ -87,12 +86,12 @@ feature. This will generate links between objects in the documentation.
==== HTTP APIs
Many projects provide an HTTP-based API. Use
http://pythonhosted.org/sphinxcontrib-httpdomain/[sphinxcontrib-httpdomain]
https://sphinxcontrib-httpdomain.readthedocs.io/en/stable/[sphinxcontrib-httpdomain]
to produce the HTTP interface documentation. This task is made
significantly easier if the project is using a web framework that
http://pythonhosted.org/sphinxcontrib-httpdomain/[sphinxcontrib-httpdomain]
https://sphinxcontrib-httpdomain.readthedocs.io/en/stable/[sphinxcontrib-httpdomain]
supports, like Flask. In that case, all you need to do is add the
http://pythonhosted.org/sphinxcontrib-httpdomain/[sphinxcontrib-httpdomain]
https://sphinxcontrib-httpdomain.readthedocs.io/en/stable/[sphinxcontrib-httpdomain]
ReStructuredText directives to the functions or classes that provide the
Flask endpoints.
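As an illustrative sketch (the endpoint and data below are hypothetical), the
endpoint documentation can live in the Flask view's docstring, which
sphinxcontrib-httpdomain's Flask support then picks up:

[source,python]
----
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical in-memory data store, for illustration only.
PACKAGES = {"requests": {"name": "requests", "summary": "HTTP library"}}


@app.route("/api/v1/packages/<name>")
def get_package(name):
    """
    Retrieve a single package by name.

    :param name: the package name
    :status 200: the package was found
    :status 404: no package with that name exists
    """
    if name not in PACKAGES:
        abort(404)
    return jsonify(PACKAGES[name])
----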
@ -131,18 +130,9 @@ Release Notes
....
Then each commit can add a file in the `news` folder to document the
change. The file has the `source.type` name format, where `type` is one
change. The file has the `source.type` name format, where `source` is one
of:
* `feature`: for new features
* `bug`: for bug fixes
* `api`: for API changes
* `dev`: for development-related changes
* `author`: for contributor names
* `other`: for other changes
And where the `source` part of the filename is:
* `42` when the change is described in issue `42`
* `PR42` when the change has been implemented in pull request `42`, and
there is no associated issue
@ -151,6 +141,15 @@ and there is no associated issue or pull request.
* `username` for contributors (`author` extension). It should be the
username part of their commits' email address.
And where the `type` part of the filename is:
* `feature`: for new features
* `bug`: for bug fixes
* `api`: for API changes
* `dev`: for development-related changes
* `author`: for contributor names
* `other`: for other changes
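To make the naming scheme concrete, here are a few illustrative (hypothetical)
fragment file names that follow the rules above:

....
news/42.bug        a bug fix for the bug reported in issue 42
news/PR123.feature a new feature implemented in pull request 123 with no issue
news/jdoe.author   credits the contributor whose commit email username is "jdoe"
....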
A preview of the release notes can be generated with
`towncrier --draft`.

@ -1,75 +0,0 @@
== fedmsg
https://fedmsg.readthedocs.io/[fedmsg] is a ZeroMQ-based messaging
library used throughout Fedora Infrastructure applications. It uses a
publish/subscribe design so applications can decide what messages
they're interested in receiving.
[WARNING]
.Warning
====
fedmsg does not guarantee message delivery. Messages will be lost and
your application should never depend on the reliable delivery of fedmsgs
to function.
====
=== Topics
==== Existing Topics
There are many existing https://fedora-fedmsg.readthedocs.org/[topics]
in Fedora Infrastructure.
==== New Topics
When creating new message topics, please use the following format:
....
org.fedoraproject.ENV.CATEGORY.OBJECT[.SUBOBJECT].EVENT
....
Where:
____
* `ENV` is one of [.title-ref]#dev#, [.title-ref]#stg#, or
[.title-ref]#production#.
* `CATEGORY` is the name of the service emitting the message --
something like [.title-ref]#koji#, [.title-ref]#bodhi#, or
[.title-ref]#fedoratagger#
* `OBJECT` is something like [.title-ref]#package#, [.title-ref]#user#,
or [.title-ref]#tag#
* `SUBOBJECT` is something like [.title-ref]#owner# or
[.title-ref]#build# (in the case where `OBJECT` is
[.title-ref]#package#, for instance)
* `EVENT` is a verb like [.title-ref]#update#, [.title-ref]#create#, or
[.title-ref]#complete#.
____
All 'fields' in a topic *should*:
____
* Be [.title-ref]#singular# (Use [.title-ref]#package#, not
[.title-ref]#packages#)
* Use existing fields as much as possible (since [.title-ref]#complete#
is already used by other topics, use that instead of using
[.title-ref]#finished#).
____
*Furthermore*, the _body_ of messages will contain the following
envelope:
* A `topic` field indicating the topic of the message.
* A `timestamp` indicating the seconds since the epoch when the message
was published.
* A `msg_id` bearing a unique value distinguishing the message. It is
typically of the form <YEAR>-<UUID>. These can be used to uniquely query
for messages in the datagrepper web services.
* A `crypto` field indicating if the message is signed with the `X509`
method or the `gpg` method.
* An `i` field indicating the sequence of the message if it comes from a
permanent service.
* A `username` field indicating the username of the process that
published the message (sometimes, `apache` or `fedmsg` or something
else).
* Lastly, the application-specific body of the message will be contained
in a nested `msg` dictionary.

@ -15,7 +15,6 @@ new applications should use it unless there is a very good reason not to
do so.
[NOTE]
.Note
====
For historical reasons, you may find applications that don't use Flask.
Other frameworks currently in use include

@ -7,7 +7,7 @@ by an AMQP message broker and the
https://fedora-messaging.readthedocs.io/en/latest/[fedora-messaging] for
Python applications. This documentation outlines the policies for
sending and receiving messages. To learn how to send and receive
messages, see the fedora-messaging documentation.
messages, see the https://fedora-messaging.readthedocs.io/en/latest[fedora-messaging documentation].
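As a brief, illustrative sketch (the topic and body below are hypothetical),
publishing a message with fedora-messaging looks roughly like this:

[source,python]
----
from fedora_messaging import api, message

# Hypothetical topic and body; real applications should define their own
# message schemas as described in the fedora-messaging documentation.
msg = message.Message(
    topic="myapp.build.complete",
    body={"package": "example", "status": "success"},
)
api.publish(msg)
----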
=== Broker URLs
@ -16,7 +16,7 @@ through the proxies at `amqps://rabbitmq.fedoraproject.org` for
production and `amqps://rabbitmq.stg.fedoraproject.org` for staging.
Clients can connect using these URLs both inside and outside the Fedora
VPN, but users outside need to use a separate virtual host. Consult the
https://fedora-messaging.readthedocs.io/en/stable/fedora-broker.html[fedora-messaging
https://fedora-messaging.readthedocs.io/en/latest/quick-start.html#fedora-s-public-broker[fedora-messaging
documentation] for details on how to connect externally.
=== Identity
@ -61,7 +61,6 @@ configuration so that if they do not exist, the application will fail
with a helpful error message about which resource is not available.
[WARNING]
.Warning
====
Because AMQP clients don't have permission to create objects, you need
to set

@ -39,11 +39,11 @@ view logs and request debugging containers, but only from batcave01. For
example, to view the logs of a deployment in staging:
....
$ ssh batcave01.phx2.fedoraproject.org
$ oc login os-master01.stg.phx2.fedoraproject.org
$ ssh batcave01.iad2.fedoraproject.org
$ oc login os-master01.stg.iad2.fedoraproject.org
You must obtain an API token by visiting https://os.stg.fedoraproject.org/oauth/token/request
$ oc login os-master01.stg.phx2.fedoraproject.org --token=<Your token here>
$ oc login os-master01.stg.iad2.fedoraproject.org --token=<Your token here>
$ oc get pods
librariesio2fedmsg-28-bfj52 1/1 Running 522 28d
$ oc logs librariesio2fedmsg-28-bfj52
@ -52,12 +52,11 @@ $ oc logs librariesio2fedmsg-28-bfj52
==== Deploying Your Application
Applications are deployed to OpenShift using
https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/openshift-apps/[Ansible
https://pagure.io/fedora-infra/ansible/blob/main/f/playbooks/openshift-apps[Ansible
playbooks]. You will need to create an
https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps/[Ansible
https://pagure.io/fedora-infra/ansible/blob/main/f/roles/openshift-apps[Ansible
Role] for your application. A role is made up of several YAML files that
define OpenShift
https://docs.openshift.com/container-platform/latest/architecture/core_concepts/index.html[objects].
define OpenShift objects.
To create these YAML objects you have two options:
[arabic]
@ -83,19 +82,19 @@ configurations that won't work.
You will likely need (at a minimum) the following objects:
* A
https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html#builds[BuildConfig]
https://docs.openshift.com/container-platform/4.10/cicd/builds/understanding-buildconfigs.html[BuildConfig]
- This defines how your container is built.
* An
https://docs.openshift.com/container-platform/latest/architecture/core_concepts/builds_and_image_streams.html#image-streams[ImageStream]
https://docs.openshift.com/container-platform/4.10/openshift_images/image-streams-manage.html[ImageStream]
- This references a "stream" of container images and lets you trigger
deployments or image builds based on changes in a stream.
* A
https://docs.openshift.com/container-platform/latest/architecture/core_concepts/deployments.html[DeploymentConfig]
https://docs.openshift.com/container-platform/4.10/applications/deployments/what-deployments-are.html[DeploymentConfig]
- This defines how your container is deployed (how many replicas, what
ports are available, etc)
* A
https://docs.openshift.com/container-platform/latest/architecture/core_concepts/pods_and_services.html#services[Service]
https://docs.openshift.com/container-platform/4.10/applications/connecting_applications_to_services/getting-started-with-service-binding.html[Service]
- An internal load balancer that routes traffic to your pods.
* A
https://docs.openshift.com/container-platform/latest/architecture/networking/routes.html[Route]
https://docs.openshift.com/container-platform/4.10/networking/routes/route-configuration.html[Route]
- This exposes a Service as a host name.

@ -21,7 +21,7 @@ described in https://tools.ietf.org/html/rfc6919[RFC 6919].
=== Static security checking
If written in Python, the application MUST pass
https://github.com/PyCQA/bandit[Bandit] on level medium with default
https://pypi.org/project/bandit/[Bandit] on level medium with default
configuration. Any exclusion lines that appear in the codebase MUST be
sufficiently explained. Code that is only executed during test suite
runs MAY be exempted from this check.
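For illustration, a run over a hypothetical `yourpackage` directory that only
reports findings of medium severity or higher might look like:

....
$ bandit -r yourpackage -ll
....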

@ -3,21 +3,22 @@
When a new application is deployed in Fedora, it is critical that you
add a standard operating procedure (SOP) for it. This documents how the
application is deployed in Fedora. Consult the current `sops` and if one
application is deployed in Fedora. Consult the current
https://docs.fedoraproject.org/en-US/infra/sysadmin_guide/[sops] and if one
is missing, please add it.
You can modify this documentation or any of the current `sops` by making
a https://docs.pagure.org/pagure/usage/pull_requests.html[pull request]
to the https://pagure.io/infra-docs/[Pagure project].
to the https://pagure.io/infra-docs-fpo[Pagure project].
=== Adding a Standard Operating Procedure
To add a standard operating procedure, create a new
http://www.sphinx-doc.org/en/stable/rest.html[reStructedText] file in
https://asciidoc-py.github.io/userguide.html[AsciiDoc] file in
the
https://pagure.io/infra-docs/blob/master/f/docs/sysadmin-guide/sops[sop
https://pagure.io/infra-docs-fpo/blob/master/f/modules/sysadmin_guide/pages[sop
directory] and then add it to the
https://pagure.io/infra-docs/blob/master/f/docs/sops/index.rst[index
https://pagure.io/infra-docs-fpo/blob/master/f/modules/sysadmin_guide/nav.adoc[index
file].
SOP text file names should use lowercase with dashes. Describe the
@ -28,34 +29,32 @@ service and end the page name with ".rst".
Here's the template for adding a new SOP:
....
=========
SOP Title
=========
= SOP Title
Provide a brief description of the SOP here.
Contact Information
===================
Owner
== Contact Information
Owner::
<usually, Fedora Infrastructure Team>
Contact
Contact::
<stakeholder fas groups, individuals, IRC channels to find the action>
Location
Location::
<Relevant URIs, etc>
Servers
Servers::
<affected machines>
Purpose
Purpose::
<a brief description of the SOP's purpose>
Sections Describing Things
==========================
== Sections Describing Things
Put detailed information in these sections
A Helpful Sub-section
---------------------
=== A Helpful Sub-section
You can even have sub-sections.
A Sub-section of a sub-section
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
==== A Sub-section of a sub-section
....
If a current SOP does not follow this general template, it should be
@ -63,58 +62,45 @@ updated to do so.
=== SOP Formatting
SOPs are written in ReStructuredText. To learn about the spec, read:
SOPs are written in https://asciidoc-py.github.io/userguide.html[AsciiDoc].
To learn about the spec, read:
* http://docutils.sourceforge.net/docs/user/rst/quickstart.html[Quickstart]
* http://docutils.sourceforge.net/docs/user/rst/quickref.html[Quick
* https://docs.asciidoctor.org/asciidoc/latest/syntax-quick-reference/[Quick
references]
* http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html[Full
* https://asciidoc-py.github.io/userguide.html[Full
Specification]
* http://www.sphinx-doc.org/en/stable/rest.html[Sphinx reStructuredText]
The format is somewhat simple if you remember a few key points:
* Sections are delineated by underlined texts. The convention is:
** Title has "=" above and below the title text, at least as many
columns as the title itself.
** Top level sections are underlined by "===" - at least as many columns
as the section title in the line above.
** Second level sections are underlined by "---"
** Any of
`! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ \` { | } ~` are
valid section delineators. If you need more than two section levels,
choose between them but be sure to be consistent.
* Indents are significant. Only indent for things like block quotes,
nested lists etc. Match the tabstop of the document you are editing.
Make note of the indentation level as you nest lists.
* Use literal blocks for code and command sections. An indented section
found after `::` and a newline will be processed as a literal block.
* Sections are prefixed by "=". The convention is that the number of "="
signs corresponds to the section level, starting with the title, which has
only one.
* Blocks like `Note` are delimited by "====" below and above the text that
should be in the note.
* Use literal blocks for code and command sections.
Add "...." above and below the block.
The "...." is not the only delimiter you can use;
there are https://asciidoctor.org/docs/asciidoc-writers-guide/#delimited-blocks[multiple others].
Like this:
+
....
----
Literal blocks can be nested into lists (think a numbered sequence of steps)
1. Log into the thing
. Log into the thing
2. run a command::
this indented relative to the first content column of the list
so it is a block quote
This line begins at the first content column of the list,
so it is considered a continuation of the list content.
3. Log out of the thing.
. Run the following commands
+
....
This text will be a literal block
....
. Log out of the thing.
----
* For inline literals (commands, filenames, anything that wouldn't make
sense if translated, use your judgement) use double backticks, like
sense if translated, use your judgement) use backticks, like
this:
+
....
You should specify your Fedora username and ssh key in ``~/.ssh/config`` to make connecting
You should specify your Fedora username and ssh key in `~/.ssh/config` to make connecting
better.
....
* If nesting and mixing content types, use newlines liberally. A bullet
list doesn't need newlines, but if a list item's content spans more than
one line, a newline _is_ required. If a list is nested, the top level
list should have newlines between list members.
* When adding blocks in a list, use `+` between them for the correct indentation.

@ -2,17 +2,18 @@
Tests make development easier for both veteran project contributors and
newcomers alike. Most projects use the
https://docs.python.org/3.6/library/unittest.html[unittest] framework
https://docs.python.org/3.10/library/unittest.html[unittest] framework
for tests so you should familiarize yourself with this framework.
[NOTE]
.Note
====
Writing tests can be a great way to get involved with a project. It's an
opportunity to get familiar with the codebase and the code submission
and review process. Check the project's code coverage and write a test
for a piece of code missing coverage!
====Patches should be accompanied by one or more tests to demonstrate
====
Patches should be accompanied by one or more tests to demonstrate
the feature or bugfix works. This makes the review process much easier
since it allows the reviewer to run your code with very little effort,
and it lets developers know when they break your code.
@ -29,7 +30,7 @@ the package they test. That is, if the package contains the
`<package>/server/push.py` module, the test module should be in a module
called `<test_root>/server/test_push.py`.
. Within each test module, follow the
https://docs.python.org/3.6/library/unittest.html#organizing-test-code[unittest
https://docs.python.org/3.10/library/unittest.html#organizing-test-code[unittest
code organization guidelines].
. Include documentation blocks for each test case that explain the goal
of the test.
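As an illustrative sketch (the module and test names are hypothetical), a test
module following this layout and the unittest conventions could look like:

[source,python]
----
# Hypothetical file: <test_root>/server/test_push.py, mirroring <package>/server/push.py
import unittest


class TestPush(unittest.TestCase):
    """Tests for the hypothetical ``<package>.server.push`` module."""

    def test_push_reports_success(self):
        """Assert that a successful push is reported as such."""
        # Placeholder assertion; a real test would exercise push.py here.
        self.assertTrue(True)


if __name__ == "__main__":
    unittest.main()
----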
@ -39,24 +40,25 @@ code that makes HTTP requests, consider using
https://pypi.python.org/pypi/vcrpy[vcrpy].
[NOTE]
.Note
====
You may find projects that do not follow this test layout. In those
cases, consider re-organizing the tests to follow the layout described
here and follow the established conventions for that project until that
happens.
======= Test Runners
====
=== Test Runners
Projects should include a way to run the tests with ease locally and the
steps to run the tests should be documented. This should be the same way
the continuous integration (Jenkins, TravisCI, etc.) tool runs the
the continuous integration (Jenkins, Zuul CI, etc.) tool runs the
tests.
There are many test runners available that can discover
https://docs.python.org/3.6/library/unittest.html[unittest] based tests.
https://docs.python.org/3.10/library/unittest.html[unittest] based tests.
These include:
* https://docs.python.org/3.6/library/unittest.html[unittest] itself via
* https://docs.python.org/3.10/library/unittest.html[unittest] itself via
`python -m unittest discover`
* http://docs.pytest.org/en/latest/contents.html[pytest]
* http://nose2.readthedocs.io/en/latest/[nose2]
@ -64,7 +66,6 @@ These include:
Projects should choose whichever runner best suits them.
[NOTE]
.Note
====
You may find projects using the
https://nose.readthedocs.io/en/latest/[nose] test runner. nose is in
@ -73,7 +74,9 @@ cease without a new maintainer. They recommend using
https://docs.python.org/3.6/library/unittest.html[unittest],
http://docs.pytest.org/en/latest/contents.html[pytest], or
http://nose2.readthedocs.io/en/latest/[nose2].
====[[tox-config]]
====
[[tox-config]]
=== Tox
https://pypi.python.org/pypi/tox[Tox] is an easy way to run your
@ -85,7 +88,7 @@ project's documentation builds without errors or warnings.
Here's an example `tox.ini` file that runs a project's unit tests in
Python 2.7, Python 3.4, Python 3.5, and Python 3.6. It also runs
https://pypi.python.org/pypi/flake8[flake8] on the entire codebase and
builds the documentation with the "warnings treated as errors" Sphinx
builds the documentation with the `warnings treated as errors` Sphinx
flag enabled. Finally, it enforces 100% coverage on lines edited by new
patches using https://pypi.org/project/diff-cover/[diff-cover]:
@ -186,14 +189,13 @@ of the Python package you are measuring coverage on:
addopts = --cov-config .coveragerc --cov=yourpackage --cov-report term --cov-report xml --cov-report html
....
causes coverage (and any test running plugins using coverage) to fail if
This causes coverage (and any test running plugins using coverage) to fail if
the coverage level is not 100%. New projects should enforce 100% test
coverage. Existing projects should ensure test coverage does not drop when
accepting a pull request, and should increase the minimum test coverage
until it is 100%.
[NOTE]
.Note
====
https://pypi.python.org/pypi/coverage/[coverage] has great
https://coverage.readthedocs.io/en/coverage-4.3.4/excluding.html[exclusion]
@ -202,7 +204,9 @@ functions, classes, and whole source files from your coverage report. If
you have code that doesn't make sense to have tests for, you can exclude
it from your coverage report. Remember to leave a comment explaining why
it's excluded!
======= Licenses
====
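For instance, coverage.py's default `# pragma: no cover` marker excludes the
marked function or line from the report (the helper below is hypothetical):

[source,python]
----
def debug_dump(obj):  # pragma: no cover
    """Hypothetical helper that is only used in interactive debugging sessions."""
    # Excluded from coverage because the test suite never exercises it.
    print(repr(obj))
----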
=== Licenses
The https://pypi.org/project/liccheck/[liccheck] checker can verify that
every dependency in your project has an acceptable license. The