Developing on the server

You want to work on Panoramax and offer bug fixes or new features? That's awesome! 🤩

Here is some guidance on working with the Panoramax API code.

If something seems missing or incomplete, don't hesitate to contact us by email or by opening an issue. We really want Panoramax to be a collaborative project, so everyone is welcome (see our code of conduct).

Documentation

Documenting things is important! 😎 We have three levels of documentation in the API repository:

  • Code itself is documented with Python Docstrings
  • HTTP API is documented using OpenAPI 3
  • Broader documentation on requirements, install, config (the one you're reading) using Markdown and Mkdocs

Code documentation

Code documentation is done using docstrings. You can read it in your favorite IDE, or with Python:

import geovisio
help(geovisio)
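
For instance, a documented function could look like this (a hypothetical helper, not an actual geovisio function; the exact docstring layout may vary across the codebase):

import math

def degrees_to_radians(angle: float) -> float:
    """Convert an angle from degrees to radians.

    Parameters
    ----------
    angle : float
        Angle in degrees

    Returns
    -------
    float
        The same angle, in radians
    """
    return math.radians(angle)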

API documentation

API documentation is automatically served by the API itself. You can access it locally by running the API:

flask run

Then access it through localhost:5000/api/docs/swagger.

The API doc is generated from formatted code comments using Flasgger. You're likely to find these comments in:

  • geovisio/web/docs.py: for data structures and third-party specifications
  • geovisio/web/*.py: for specific routes parameters

If you're changing the API, make sure to document all edited parameters and new routes so users can easily understand how Panoramax works.
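
As an illustration, a Flasgger comment is a YAML block placed in a route's docstring after a --- separator. Here is a minimal sketch (the route, tag and parameter names are made up for the example):

from flask import Blueprint, jsonify

bp = Blueprint("demo", __name__)

@bp.route("/api/demo/<demo_id>")
def getDemo(demo_id):
    """Get a single demo resource
    ---
    tags:
        - Demo
    parameters:
        - name: demo_id
          in: path
          description: ID of the resource to fetch
          required: true
          schema:
            type: string
    responses:
        200:
            description: The requested resource
    """
    return jsonify({"id": demo_id})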

General documentation (Mkdocs)

General documentation is available in the docs folder of the repository. You can read it online, or access it locally:

# Install dependencies
pip install -e .[docs]

# Generate swagger doc, and run mkdocs with a local server
make serve-doc

Make sure to keep it updated if you work on new features.

Testing

We're trying to make Panoramax as reliable and secure as possible. To ensure this, we rely heavily on code testing.

Unit tests (Pytest)

Unit tests ensure that small parts of the code work as expected. We use Pytest to run them.

You can run tests by following these steps:

  • In an environment variable, or a test.env dot file, set a DB_URL parameter (following the DB_URL configuration parameter format), so that a dedicated database is used for testing (see the example below)
  • Run the pytest command
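
For example, a minimal test.env could look like this (host, credentials and database name are illustrative, adapt them to your local setup):

DB_URL=postgres://gvs:gvspwd@localhost:5432/geovisio_test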

Unit tests live mainly in the /tests/ folder; some simpler tests are written directly as doctests in their respective source files (in /geovisio).
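
As a reminder, a doctest embeds an example call and its expected output directly in a docstring, and the test runner checks that they match (hypothetical function, for illustration only):

def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug.

    >>> slugify("My Nice Sequence")
    'my-nice-sequence'
    """
    return title.lower().replace(" ", "-")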

If you're working on bug fixes or new features, please make sure to add appropriate tests to maintain Panoramax's level of quality.

Note that tests can also be run using Docker with the following command:

# All tests (including heavy ones)
docker-compose \
    run --rm --build \
    -e DB_URL="postgres://gvs:gvspwd@db/geovisio" \
    backend test  # Replace "test" with "test-ci" to run only the lighter tests

Also note that Pytest test folders are cleaned up after execution; temporary files only exist while the tests are running.

STAC API conformance

The third-party tool STAC API Validator is used to ensure that the Panoramax API is compatible with the STAC API specifications. It runs automatically in our tests and GitLab CI:

pytest -vv tests/fixed_data/test_api_conformance.py

Note: you need to install the dependencies for this:

pip install -e .[api-conformance]

Code format

Before opening a pull request, code needs to be formatted with Black.

Install development dependencies:

pip install -e .[dev]

Format sources:

black .

You can also install git pre-commit hooks to format code on commit with:

pre-commit install

Translation

Translations are managed with Flask Babel, which relies on a classic Python gettext mechanism. They are stored under geovisio/translations/ directory.

Only a few parts of the API need internationalization, particularly:

  • RSS feed of sequences (/api/collections?format=rss)
  • HTML templates (default API pages)
  • Various error or warning labels returned in HTTP responses

If you'd like to translate a string in Python code, you can do the following:

from flask_babel import gettext as _
...
print(_("My text is %(mood)s", mood="cool"))

For HTML/Jinja templates, you can use this syntax (or any of the other supported ones):

<p>{%trans%}My text to translate{%endtrans%}</p>

To extract all strings into a POT catalog, you can run this command:

make i18n-code2pot

Translation itself is managed through our Weblate instance; you can go there to start translating or create a new language.

To compile translated PO files into MO files, you can run this command:

make i18n-po2code

Note that if you add support for a new language, you need to enable it in geovisio/__init__.py, in this function:

def get_locale():
    return request.accept_languages.best_match(['fr', 'en'])
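
For example, to enable German (assuming the corresponding PO files have been added and compiled), append its language code to the list:

def get_locale():
    return request.accept_languages.best_match(['fr', 'en', 'de'])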

Database

Connection pool

When possible, prefer using a connection from the connection pool instead of creating a new one.

To acquire a database connection, use the context manager; this way, the connection will be released after use:

from geovisio.utils import db

with db.conn(current_app) as conn:
    r = conn.execute("SELECT * FROM some_table", []).fetchall()
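
For a parameterized query, pass the values separately so they are safely escaped (pic_id and the query are illustrative):

from geovisio.utils import db

with db.conn(current_app) as conn:
    pic = conn.execute("SELECT * FROM pictures WHERE id = %s", [pic_id]).fetchone()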

You can check the geovisio.utils.db module for more helpers.

Note

Those connections have a statement timeout (5 minutes by default) to prevent very long queries from blocking the backend. If a specific query is known to take very long, a connection without this timeout can be acquired as follows:

from geovisio.utils import db

with db.long_queries_conn(current_app) as conn:
    ...

Adding a new migration

To create a new migration, use yoyo-migrations.

The yoyo binary is available once the Python dependencies are installed.

The preferred way to write a migration is raw SQL, but a Python migration script can be added if needed.

yoyo new -m "<a migration name>" --sql

(remove the --sql to generate a Python migration).

This will open an editor with a new migration file in ./migrations.

Once saved, for SQL migrations, always provide another file named like the initial migration but with a .rollback.sql suffix, containing the associated rollback actions.
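
For instance, a migration and its rollback could be paired like this (file names are illustrative, yoyo generates the actual prefix):

migrations/20240101_01_ab3cd-add-important-stuff.sql
migrations/20240101_01_ab3cd-add-important-stuff.rollback.sql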

Note: each migration is run inside a transaction.

Updating large tables

When performing a migration that updates a potentially large table (like pictures or pictures_sequence, which can contain tens of millions of rows), we don't want to lock the whole table for too long, since that would cause downtime on the instance.

So when possible, the migration of a column should be written in batches (and, as a best effort, the code should work on both the updated and the non-updated table).

The migration of the pictures table can, for example, be written like this:

CREATE OR REPLACE PROCEDURE update_all_pictures_with_important_stuff() AS
$$
DECLARE
    last_inserted_at TIMESTAMPTZ;
BEGIN
    SELECT min(inserted_at) - INTERVAL '1 minute' FROM pictures INTO last_inserted_at;

    WHILE last_inserted_at IS NOT NULL LOOP

        -- Temporarily remove the update triggers that would fire on the batch.
        -- Be sure to check all update triggers and decide which ones should be deactivated.
        DROP TRIGGER pictures_update_sequences_trg ON pictures;
        DROP TRIGGER pictures_updates_on_sequences_trg ON pictures;

        WITH
            -- get a batch of 100 000 pictures to update
            pic_to_update AS (
                SELECT id, inserted_at FROM pictures WHERE inserted_at > last_inserted_at ORDER BY inserted_at ASC LIMIT 100000
            )
            , updated_pic AS (
                UPDATE pictures
                    SET important_stuff = 'very_important' -- do the real update here
                    WHERE id IN (SELECT id FROM pic_to_update)
            )
            SELECT MAX(inserted_at) FROM pic_to_update INTO last_inserted_at;

        RAISE NOTICE 'max insertion date is now %', last_inserted_at;

        -- Restore all deactivated triggers
        CREATE TRIGGER pictures_updates_on_sequences_trg
        AFTER UPDATE ON pictures
        REFERENCING NEW TABLE AS pictures_after
        FOR EACH STATEMENT
        EXECUTE FUNCTION pictures_updates_on_sequences();

        CREATE TRIGGER pictures_update_sequences_trg
        AFTER UPDATE ON pictures
        REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table
        FOR EACH STATEMENT EXECUTE FUNCTION pictures_update_sequence();

        -- Commit the transaction (a procedure runs in an implicit transaction,
        -- so a new one is started after this)
        COMMIT;
    END LOOP;
    RAISE NOTICE 'update finished';
END
$$ LANGUAGE plpgsql;

CALL update_all_pictures_with_important_stuff();
DROP PROCEDURE update_all_pictures_with_important_stuff;

The migrations pictures-exiv2 and jobs-error are real case examples of this.

Updating an instance database schema

Migrations are technically handled by yoyo-migrations.

For advanced schema handling (like listing the migrations, replaying a migration, etc.), you can use all of yoyo's commands.

For example, you can list all the migrations:

yoyo list --database postgresql+psycopg://user:pwd@host:port/database

Note: the database connection string should use postgresql+psycopg:// in order to force yoyo to use Psycopg v3.
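
Similarly, pending migrations can be applied, or the latest ones rolled back, using the same connection string format:

yoyo apply --database postgresql+psycopg://user:pwd@host:port/database
yoyo rollback --database postgresql+psycopg://user:pwd@host:port/database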

Keycloak

To work on authentication functionalities, you might need a locally deployed Keycloak server.

To spawn a configured Keycloak, run:

docker-compose -f docker/docker-compose-keycloak.yml up

And wait for Keycloak to start.

⚠️ Beware that this configuration is not meant to be used in production!

Then, provide the following variables to your local Panoramax API (either in a custom .env file or directly as environment variables, as stated in the corresponding documentation section).

OAUTH_PROVIDER='oidc'
FLASK_SECRET_KEY='some secret key'
OAUTH_OIDC_URL='http://localhost:3030/realms/geovisio'
OAUTH_CLIENT_ID="geovisio"
OAUTH_CLIENT_SECRET="what_a_secret"

Make a release

See dedicated documentation.