Host Based Inventory (HBI)

You've arrived at the repo for the backend of the Host Based Inventory (HBI). If you're looking for API, integration, or user documentation for HBI, please see the Inventory section in our Platform Docs site.

Getting started

Prerequisites

Before starting, ensure you have the following installed on your system:

  • Docker: For running containers and services.
  • Python 3.12.x: The recommended version for this project.
  • pipenv: For managing Python dependencies.

Environment setup

PostgreSQL configuration

Local development also requires pg_config, which is provided by the PostgreSQL development package. To install it, use the command appropriate for your system:

Fedora/CentOS
sudo dnf install libpq-devel postgresql python3.12-devel
Debian/Ubuntu
sudo apt-get install libpq-dev postgresql python3.12-dev
macOS (using Homebrew)
brew install postgresql@16
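
To verify the installation, check that pg_config is now on your PATH:

pg_config --version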

Environment variables

Create a .env file in your project root with the following content. Replace placeholders with appropriate values for your environment.

cat > ${PWD}/.env <<EOF
# RUNNING HBI Locally
PROMETHEUS_MULTIPROC_DIR=/tmp
BYPASS_RBAC="true"
BYPASS_UNLEASH="true"
# Optional legacy prefix configuration
# PATH_PREFIX="/r/insights/platform"
APP_NAME="inventory"
INVENTORY_DB_USER="insights"
INVENTORY_DB_PASS="insights"
INVENTORY_DB_HOST="localhost"
INVENTORY_DB_NAME="insights"
INVENTORY_DB_POOL_TIMEOUT="5"
INVENTORY_DB_POOL_SIZE="5"
INVENTORY_DB_SSL_MODE=""
INVENTORY_DB_SSL_CERT=""
UNLEASH_TOKEN='*:*.dbffffc83b1f92eeaf133a7eb878d4c58231acc159b5e1478ce53cfc'
UNLEASH_CACHE_DIR=./.unleash
UNLEASH_URL="http://localhost:4242/api"
# Kafka Export Service Configuration
KAFKA_EXPORT_SERVICE_TOPIC="platform.export.requests"
EOF

After creating the file, source it to set the environment variables:

source .env

Create virtual environment

  1. Install dependencies:
pipenv install --dev
  2. Activate virtual environment:
pipenv shell

Create database data directory

Provide a local directory for database persistence:

mkdir ~/.pg_data

If you use a different directory, update the volumes section in dev.yml accordingly (see the sketch below).
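
As a sketch, the relevant part of dev.yml looks something like this (the service name and container path here are illustrative; check the actual file):

services:
  db:
    volumes:
      - ~/.pg_data:/var/lib/postgresql/data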

Start dependent services

All dependent services are managed by Docker Compose and are listed in the dev.yml file. Start them with the following command:

docker compose -f dev.yml up -d

By default, the database container persists its data to local storage, so data you enter survives restarts of the container. If you want to destroy that data, do the following:

docker compose -f dev.yml down
rm -r ~/.pg_data # or whichever directory you defined in volumes

Run database migrations

make upgrade_db

Run the service

  1. Run the MQ service:
make run_inv_mq_service
  • Note: You may need to add a hosts entry for Kafka:
echo "127.0.0.1   kafka" | sudo tee -a /etc/hosts
  2. Create hosts data:
make run_inv_mq_service_test_producer NUM_HOSTS=800
  • By default, one host is created if NUM_HOSTS is not specified.
  3. Run the export service:
pipenv shell
make run_inv_export_service

In another terminal, generate events for the export service with:

make sample-request-create-export

By default, it sends a JSON-format request. To change the data format, use:

make sample-request-create-export format=[json|csv]

Testing

You can run the tests using pytest:

pytest --cov=.

Or run individual tests:

# To run all tests in a specific file:
pytest tests/test_api_auth.py
# To run a specific test:
pytest tests/test_api_auth.py::test_validate_valid_identity
  • Note: Ensure the DB-related environment variables are set before running tests, for example as shown below.
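
One way to satisfy this, assuming the .env file created earlier:

source .env
pytest --cov=.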

Running the webserver locally

The Prometheus client library was designed for multithreaded applications, whereas gunicorn uses a multiprocess architecture, so some extra setup is needed to make Prometheus integrate with gunicorn.

A temp directory for Prometheus needs to be created before the server starts, the PROMETHEUS_MULTIPROC_DIR environment variable needs to point to it, and its contents need to be removed between runs.
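
A minimal sketch of that setup, assuming /tmp/prometheus_multiproc as the directory (any writable path works):

export PROMETHEUS_MULTIPROC_DIR=/tmp/prometheus_multiproc
mkdir -p "$PROMETHEUS_MULTIPROC_DIR"
rm -rf "$PROMETHEUS_MULTIPROC_DIR"/*  # clear stale metric files between runs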

If running the server in a cluster, you can use this command:

gunicorn -c gunicorn.conf.py run

When running the server locally for development, the Prometheus configuration is done automatically. You can run the server locally using this command:

python3 run_gunicorn.py

Running all services locally

Use Honcho to run the MQ and web services at once (see the Procfile sketch below):

honcho start
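
Honcho reads its process definitions from the Procfile in the repository root. A hypothetical sketch of what such a file contains (the actual Procfile ships with the repo):

web: python3 run_gunicorn.py
mq: make run_inv_mq_service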

Legacy support

Some apps still rely on the legacy API path, which defaults to /r/insights/platform/inventory/v1/. If a legacy app requires a different prefix, it can be set using this environment variable:

export INVENTORY_LEGACY_API_URL="/r/insights/platform/inventory/api/v1"

Identity

API Requests

When testing the API, pass the identity header with each request:

x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiJ0ZXN0IiwidHlwZSI6IlVzZXIiLCJhdXRoX3R5cGUiOiJiYXNpYy1hdXRoIiwidXNlciI6eyJ1c2VybmFtZSI6InR1c2VyQHJlZGhhdC5jb20iLCJlbWFpbCI6InR1c2VyQHJlZGhhdC5jb20iLCJmaXJzdF9uYW1lIjoidGVzdCIsImxhc3RfbmFtZSI6InVzZXIiLCJpc19hY3RpdmUiOnRydWUsImlzX29yZ19hZG1pbiI6ZmFsc2UsImlzX2ludGVybmFsIjp0cnVlLCJsb2NhbGUiOiJlbl9VUyJ9fX0=

This is the Base64 encoding of:

{
  "identity": {
    "org_id": "test",
    "type": "User",
    "auth_type": "basic-auth",
    "user": {
      "username": "tuser@redhat.com",
      "email": "tuser@redhat.com",
      "first_name": "test",
      "last_name": "user",
      "is_active": true,
      "is_org_admin": false,
      "is_internal": true,
      "locale": "en_US"
    }
  }
}

The above header has the "User" identity type, but it's possible to use a "System" type header as well.

x-rh-identity: eyJpZGVudGl0eSI6eyJvcmdfaWQiOiAidGVzdCIsICJhdXRoX3R5cGUiOiAiY2VydC1hdXRoIiwgInN5c3RlbSI6IHsiY2VydF90eXBlIjogInN5c3RlbSIsICJjbiI6ICJwbHhpMTN5MS05OXV0LTNyZGYtYmMxMC04NG9wZjkwNGxmYWQifSwidHlwZSI6ICJTeXN0ZW0ifX0=

This is the Base64 encoding of:

{
  "identity": {
    "org_id": "test",
    "auth_type": "cert-auth",
    "system": {
      "cert_type": "system",
      "cn": "plxi13y1-99ut-3rdf-bc10-84opf904lfad"
    },
    "type": "System"
  }
}

If you want to encode other JSON documents, you can use the following command:

echo -n '{"identity": {"org_id": "0000001", "type": "System"}}' | base64 -w0
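
Putting it together, a complete request might look like this (a sketch that assumes the web server is listening on localhost:8080; adjust the host, port, and API prefix to your setup):

curl -H "x-rh-identity: $(echo -n '{"identity": {"org_id": "test", "type": "User", "auth_type": "basic-auth", "user": {"username": "tuser@redhat.com"}}}' | base64 -w0)" \
  http://localhost:8080/api/inventory/v1/hosts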

Kafka Messages

For Kafka messages, the Identity must be set in the platform_metadata.b64_identity field.
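
As a sketch, a message carrying the identity looks something like this (fields other than platform_metadata.b64_identity are illustrative; consult the message schemas for the authoritative shape):

{
  "operation": "add_host",
  "platform_metadata": {
    "b64_identity": "eyJpZGVudGl0eSI6..."
  },
  "data": {
    "org_id": "test",
    "fqdn": "host.example.com"
  }
}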

Identity Enforcement

The provided Identity limits access to specific hosts. For API requests, the user can only access hosts that have the same Org ID as the provided Identity. For host updates via Kafka messages, a host is updated only if the Org ID matches and the host's system_profile.owner_id matches the provided identity.system.cn value. For example, a system identity with "cn": "plxi13y1-99ut-3rdf-bc10-84opf904lfad" can only update hosts whose system_profile.owner_id equals that same value.

Payload Tracker integration

The inventory service integrates with the Payload Tracker service. Configure it using these environment variables:

KAFKA_BOOTSTRAP_SERVERS=localhost:29092
PAYLOAD_TRACKER_KAFKA_TOPIC=platform.payload-status
PAYLOAD_TRACKER_SERVICE_NAME=inventory
PAYLOAD_TRACKER_ENABLED=true
  • Enabled: Set PAYLOAD_TRACKER_ENABLED=false to disable the tracker.
  • Usage: The tracker logs success or errors for each payload operation. For example, if a payload contains multiple hosts and one fails, it's logged as a "processing_error" but doesn't mark the entire payload as failed.

Database Migrations

Generate new migration scripts with:

make migrate_db message="Description of your changes"
  • Replicated Tables: If your migration affects replicated tables, ensure you create and apply migrations for them first. See app_migrations/README.md for details.

Schema Dumps (for replication subscribers)

Capture the current HBI schema state with:

make gen_hbi_schema_dump
  • Generates a SQL file in app_migrations named hbi_schema_<YYYY-MM-dd>.sql.
  • Creates a symbolic link hbi_schema_latest.sql pointing to the latest dump.

Note: Use the optional SCHEMA_VERSION variable to customize the filename.
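
A hypothetical invocation (the exact effect on the filename depends on the Makefile):

make gen_hbi_schema_dump SCHEMA_VERSION=my_version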

Docker Builds

Build local development containers with:

docker build . -f dev.dockerfile -t inventory:dev
  • Note: Some packages require a subscription. Ensure your host has access to valid RHEL content.

Metrics

Prometheus integration provides monitoring endpoints:

  • /health: Liveness probe endpoint.
  • /metrics: Prometheus metrics endpoint.
  • /version: Returns build version info.

Cron jobs (reaper, sp-validator) push metrics to a Prometheus Pushgateway at PROMETHEUS_PUSHGATEWAY (default: localhost:9091).
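
To point the cron jobs at a different Pushgateway, override the variable before running them (the hostname below is illustrative):

export PROMETHEUS_PUSHGATEWAY="pushgateway.example.com:9091"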

Release process

This section describes the process of getting a code change from a pull request all the way to production.

1. Pull request

It all starts with a pull request. When a new pull request is opened, some jobs are run automatically; these jobs are defined in app-interface.

Should any of these jobs fail, this is indicated directly on the pull request.

When all of these checks pass and a reviewer approves the changes, the pull request can be merged by someone from the @RedHatInsights/host-based-inventory-committers team.

2. Latest image and smoke tests

When a pull request is merged to master, a new container image is built and tagged as insights-inventory:latest. This image is then automatically deployed to the Stage environment.

3. QE testing in the Stage environment

Once the image lands in the Stage environment, the QE testing can begin. People in @team-inventory-dev run the full IQE test suite against Stage, and then report the results in the #team-insights-inventory channel.

4. Promoting the image to the production environment

In order to promote a new image to the production environment, it is necessary to update the deploy-clowder.yml file. The ref parameter on the prod-host-inventory-prod namespace needs to be updated to the SHA of the validated image.

Once the change has been made, submit a merge request to app-interface. For the CI pipeline to run tests on your fork, you'll need to add @devtools-bot as a Maintainer on your fork.

After the MR has been opened, somebody from AppSRE/insights-host-inventory will review and approve it by adding a /lgtm comment. The MR is then merged automatically and the changes are deployed to the production environment. The engineer who approved the MR is responsible for monitoring the rollout of the new image.

Once that happens, contact @team-inventory-dev and request the image to be re-tested in the production environment. The new image will also be tested automatically when the Full Prod Check pipeline is run (twice daily).

5. Monitoring of deployment

It is essential to monitor the health of the service during and after the production deployment. A non-exhaustive list of things to watch includes:

  • Monitor deployment in:

Rollback process

Should unexpected problems occur during the deployment, it is possible to do a rollback. This is done by updating the ref parameter in deploy-clowder.yml to point to the previous commit SHA, or by reverting the MR that triggered the production deployment.

Updating the System Profile

In order to add or update a field on the System Profile, first follow the instructions in the inventory-schemas repo. After an inventory-schemas PR has been accepted and merged, HBI must be updated to keep its own schema in sync. To do this, run:

make update-schema

This will pull the latest version of the System Profile schema from inventory-schemas and update files as necessary. Open a PR with these changes, and it will be reviewed and merged as per the standard process.

Logging System Profile Fields

Use the environment variable SP_FIELDS_TO_LOG to log the System Profile fields of a host. These fields are logged when adding, updating or deleting a host from inventory.

SP_FIELDS_TO_LOG="cpu_model,disk_devices"

This logging helps with debugging hosts in Kibana.

Running ad hoc jobs using a different image

A ClowdJobInvocation may require a special image that differs from the one used by the parent application, i.e. host-inventory. Clowder does not allow this out of the box; Running a Special Job describes how to accomplish it.

Debugging local code with services deployed into Kubernetes namespaces

Making local code work with the services running in Kubernetes requires some additional setup steps.

Contributing

Pre-commit Hooks

The repository uses pre-commit to enforce code style. Install pre-commit hooks:

pre-commit install

If you are inside the Red Hat network, also ensure rh-pre-commit is installed, following the internal instructions.
